
We investigate the limiting behavior of the Navier-Stokes-Cahn-Hilliard model for binary-fluid flows as the diffuse-interface thickness passes to zero, in the presence of fluid-fluid-solid contact lines. Allowing for motion of such contact lines relative to the solid substrate is required to adequately model multi-phase and multi-species fluid transport past and through solid media. Even though diffuse-interface models provide an inherent slip mechanism through mobility-induced diffusion, this slip vanishes as the interface thickness and mobility parameter tend to zero in the so-called sharp-interface limit. The objective of this work is to present dynamic wetting and generalized Navier boundary conditions for diffuse-interface models that remain consistent in the sharp-interface limit. We concentrate our analysis on a prototypical binary-fluid Couette-flow problem. To verify the consistency of the diffuse-interface model in the limit of vanishing interface thickness, we provide reference limit solutions of a corresponding sharp-interface model. For parameter values both at and away from the critical viscosity ratio, we present and compare the results of both the diffuse- and sharp-interface models. The close match between the two models' results indicates that the considered test case lends itself well as a benchmark for further research.
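
For orientation, the two boundary conditions named above can be sketched in the diffuse-interface form popularized by Qian, Wang, and Sheng; the notation below is generic and may differ from this paper's formulation. Here φ is the phase field, u_τ the tangential fluid velocity at the wall, and u_w the wall velocity:

```latex
% Generalized Navier boundary condition (GNBC): wall friction balanced by
% viscous shear plus the uncompensated Young stress L(\phi)\,\partial_\tau\phi.
\beta\,(u_\tau - u_w) \;=\; -\,\eta\,\partial_n u_\tau \;+\; L(\phi)\,\partial_\tau\phi,
\qquad
L(\phi) \;=\; \varepsilon\sigma\,\partial_n\phi \;+\; \gamma_w'(\phi),

% Dynamic wetting condition: wall relaxation of the phase field.
\partial_t\phi \;+\; u_\tau\,\partial_\tau\phi \;=\; -\,\Gamma_w\,L(\phi).
```

Here β is a wall friction coefficient, γ_w(φ) the wall energy, and Γ_w a wall relaxation rate; the consistency question is how β, Γ_w, and the mobility must scale as the interface thickness ε tends to zero.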

Related Content

The ACM/IEEE International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community an opportunity to further advance the foundations of modeling and to propose innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
August 22, 2024

We present a feasibility study of real-time automated data standardization that leverages Large Language Models (LLMs) to enhance seamless positioning systems in IoT environments. By integrating and standardizing heterogeneous sensor data from smartphones, IoT devices, and dedicated systems such as Ultra-Wideband (UWB), our approach ensures data compatibility and improves positioning accuracy using the Extended Kalman Filter (EKF). The core components include the Intelligent Data Standardization Module (IDSM), which employs a fine-tuned LLM to convert varied sensor data into a standardized format, and the Transformation Rule Generation Module (TRGM), which automates the creation of transformation rules and scripts for ongoing data standardization. Evaluated in real-time environments, the proposed pipeline demonstrates adaptability and scalability, enhancing operational efficiency and accuracy in seamless navigation. This study underscores the potential of advanced LLMs in overcoming sensor-data integration complexities, paving the way for more scalable and precise IoT navigation solutions.
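
The standardize-then-fuse idea can be sketched as follows; `standardize` here is a hand-written stand-in for the LLM-based IDSM, and all formats, field names, and constants are hypothetical:

```python
# Minimal sketch: map heterogeneous readings onto one schema, then run an
# EKF range update. The real IDSM uses a fine-tuned LLM where `standardize`
# uses hand-written rules.
import numpy as np

def standardize(raw: dict) -> dict:
    """Convert a vendor-specific reading into a common range observation."""
    if "uwb_range_cm" in raw:                       # hypothetical UWB format
        return {"value_m": raw["uwb_range_cm"] / 100.0,
                "anchor": np.array(raw["anchor_xy"], float)}
    if "rssi" in raw:                               # hypothetical Wi-Fi format
        # Crude log-distance path-loss inversion (illustrative constants).
        d = 10 ** ((-40.0 - raw["rssi"]) / (10 * 2.0))
        return {"value_m": d, "anchor": np.array(raw["ap_xy"], float)}
    raise ValueError("unknown sensor format")

def ekf_update(x, P, meas, R=0.04):
    """One EKF measurement update for a 2-D position state with a
    range observation h(x) = ||x - anchor||."""
    diff = x - meas["anchor"]
    pred = np.linalg.norm(diff)
    H = (diff / pred).reshape(1, 2)                 # Jacobian of h at x
    S = H @ P @ H.T + R
    K = P @ H.T / S
    x = x + (K * (meas["value_m"] - pred)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
for raw in [{"uwb_range_cm": 310, "anchor_xy": [3, 0]},
            {"rssi": -52.0, "ap_xy": [0, 4]}]:
    x, P = ekf_update(x, P, standardize(raw))
print(x)
```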

The inherent diversity of computation types within deep neural network (DNN) models often requires a variety of specialized units in hardware processors, which limits computational efficiency and increases both inference latency and power consumption, especially when the hardware processor needs to support and execute different neural networks. In this study, we introduce NeuralMatrix, which elastically transforms the computations of entire DNNs into linear matrix operations. This transformation allows seamless execution of various DNN models using matrix operations alone and paves the way for running versatile DNN models on a single General Matrix Multiplication (GEMM) accelerator. Extensive experiments with both CNN- and transformer-based models demonstrate the potential of NeuralMatrix to accurately and efficiently execute a wide range of DNN models, achieving 2.17x to 38.72x higher computational efficiency (i.e., throughput per unit of power) compared to CPUs, GPUs, and SoC platforms. This level of efficiency is usually attainable only with an accelerator designed for a specific neural network.
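
The abstract does not spell out the transformation itself; a familiar instance of the same general idea (lowering a computation to a single GEMM) is the classic im2col trick for convolutions, sketched below in NumPy. This is a textbook illustration, not NeuralMatrix's actual method.

```python
# Classic im2col lowering of a 2-D convolution to one GEMM.
import numpy as np

def conv2d_as_gemm(x, w):
    """x: (C, H, W) input; w: (K, C, R, S) filters; stride 1, no padding."""
    C, H, W = x.shape
    K, _, R, S = w.shape
    Ho, Wo = H - R + 1, W - S + 1
    # Unfold every receptive field into a column: (C*R*S, Ho*Wo).
    cols = np.empty((C * R * S, Ho * Wo))
    for i in range(Ho):
        for j in range(Wo):
            cols[:, i * Wo + j] = x[:, i:i + R, j:j + S].ravel()
    # One GEMM applies all K filters over all positions at once.
    out = w.reshape(K, -1) @ cols
    return out.reshape(K, Ho, Wo)

x = np.random.randn(3, 8, 8)
w = np.random.randn(4, 3, 3, 3)
print(conv2d_as_gemm(x, w).shape)   # (4, 6, 6)
```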

We explore inference within sparse linear models, focusing on scenarios where both predictors and errors carry serial correlations. We establish a clear link between predictor serial correlation and the finite-sample performance of the LASSO, showing that even orthogonal or weakly correlated stationary AR processes can lead to significant spurious correlations due to their serial dependence. To address this challenge, we propose a novel approach named ARMAr-LASSO (ARMA residuals LASSO), which applies the LASSO to predictor time series that have been pre-whitened with ARMA filters, together with lags of the dependent variable. Utilizing the near-epoch dependence framework, we derive both asymptotic results and oracle inequalities for the ARMAr-LASSO, and demonstrate that it effectively reduces estimation errors while also providing an effective forecasting and feature-selection strategy. Our findings are supported by extensive simulations and an application to real-world macroeconomic data, which highlight the superior performance of the ARMAr-LASSO for handling sparse linear models in the context of time series.
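
A minimal sketch of the pre-whitening recipe described above, assuming ARMA(1,1) filters fitted per predictor with statsmodels and a single lag of the dependent variable; the orders, lag count, and cross-validation scheme here are illustrative, not the paper's choices:

```python
# Sketch of the ARMAr-LASSO idea: fit an ARMA filter to each predictor,
# then run the LASSO on the ARMA residuals plus a lag of y.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
T, p = 400, 10
X = np.zeros((T, p))
for t in range(1, T):                     # stationary AR(1) predictors
    X[t] = 0.8 * X[t - 1] + rng.standard_normal(p)
y = X[:, 0] - 0.5 * X[:, 1] + rng.standard_normal(T)

# Pre-whiten each predictor with a fitted ARMA(1,1) filter.
Xw = np.column_stack([ARIMA(X[:, j], order=(1, 0, 1)).fit().resid
                      for j in range(p)])

# Design matrix: whitened predictors plus one lag of y (drop first row).
Z = np.column_stack([Xw[1:], y[:-1]])
fit = LassoCV(cv=5).fit(Z, y[1:])
print(np.round(fit.coef_, 2))   # the true predictors (cols 0, 1) dominate
```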

In-Context Learning (ICL) empowers Large Language Models (LLMs) with the capacity to learn in context, achieving downstream generalization without gradient updates but with only a few in-context examples. Despite the encouraging empirical success, the underlying mechanism of ICL remains unclear, and existing research offers varied viewpoints for understanding it. These studies propose intuition-driven and ad-hoc technical solutions for interpreting ICL, leaving an ambiguous road map. In this paper, we leverage a data-generation perspective to reinterpret recent efforts and demonstrate the potential broader usage of popular technical solutions, moving toward a more systematic angle. For a conceptual definition, we rigorously adopt the terms skill learning and skill recognition. The difference between them is that skill learning can acquire new data generation functions from in-context data, whereas skill recognition merely selects a data generation function already learned during pretraining. We also provide a comprehensive study of the merits and weaknesses of different solutions, and highlight the uniformity among them given the perspective of data generation, establishing a technical foundation for future research to incorporate the strengths of different lines of work.
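
The distinction can be made concrete with a toy data-generation example (purely illustrative, not the paper's formalism): skill recognition scores a fixed library of candidate generating functions against the context, while skill learning fits a new function from the context itself.

```python
# Toy contrast between skill recognition and skill learning under a
# data-generation view.
import numpy as np

rng = np.random.default_rng(1)
xs = rng.uniform(-1, 1, 8)
ys = 3.0 * xs + 0.1 * rng.standard_normal(8)   # context drawn from f(x) = 3x

# Skill recognition: pick the best-explaining function from a fixed
# library of pretrained "skills" -- nothing new is learned.
library = {"sin": np.sin, "square": np.square, "2x": lambda x: 2 * x}
scores = {k: -np.sum((f(xs) - ys) ** 2) for k, f in library.items()}
print("recognized skill:", max(scores, key=scores.get))

# Skill learning: fit a new generating function from the context itself,
# here by least squares -- a function outside the library is acquired.
slope = float(np.dot(xs, ys) / np.dot(xs, xs))
print("learned skill: f(x) = %.2f * x" % slope)
```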

We describe a novel multi-agent, multi-scale computational cognitive interaction model of instrument operations at the Linac Coherent Light Source (LCLS). A leading scientific user facility operated by the SLAC National Accelerator Laboratory for the U.S. Department of Energy, LCLS is the world's first hard x-ray free electron laser and, as such, is in high demand and heavily oversubscribed. Our overall project employs cognitive engineering methodologies to improve experimental efficiency and scientific productivity by refining experimental interfaces and workflows, simplifying tasks, reducing errors, and improving operator safety and stress levels. Our model simulates aspects of human cognition at multiple cognitive and temporal scales, ranging from seconds to hours, and among agents playing multiple roles, including instrument operator, real-time data analyst, and experiment manager. The model can predict impacts stemming from proposed changes to operational interfaces and workflows. Because the model code is open source and supplemental videos detail all aspects of the model and results, this approach could be applied to other experimental apparatus and processes. Example results demonstrate the model's potential in guiding modifications to improve operational efficiency and scientific output. We discuss the implications of our findings for cognitive engineering in complex experimental settings and outline future directions for research.
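
As a flavor of the multi-agent, multi-scale idea, here is a deliberately tiny discrete-event sketch in which agents in two roles act on different time scales; the actual LCLS model is far richer, and the role names and constants below are invented for illustration.

```python
# Toy discrete-event sketch: operator and analyst agents with different
# reaction time scales share one experiment timeline.
import heapq, random

random.seed(0)
events = [(0.0, "sample_ready")]
t_done, samples = 0.0, 0
REACT = {"operator": 5.0, "analyst": 30.0}   # mean seconds per action (toy)

while events and samples < 20:
    t, kind = heapq.heappop(events)
    if kind == "sample_ready":
        # Operator aligns the instrument, then hands off to the analyst.
        heapq.heappush(events, (t + random.expovariate(1 / REACT["operator"]),
                                "data_collected"))
    elif kind == "data_collected":
        # Analyst checks data quality before the next sample is requested.
        heapq.heappush(events, (t + random.expovariate(1 / REACT["analyst"]),
                                "sample_ready"))
        samples, t_done = samples + 1, t

print(f"{samples} samples in {t_done / 60:.1f} min")
```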

The availability of Large Language Models (LLMs) that can generate code has made it possible to create tools that improve developer productivity. Integrated development environments (IDEs), which developers use to write software, are often used as an interface to interact with LLMs. Although many such tools have been released, almost all of them focus on general-purpose programming languages. Domain-specific languages, such as those crucial for Information Technology (IT) automation, have not received much attention. Ansible is one such YAML-based IT automation-specific language. Ansible Lightspeed is an LLM-based service designed explicitly to generate Ansible YAML from a natural-language prompt. This paper first presents the design and implementation of the Ansible Lightspeed service. We then evaluate its utility to developers using diverse indicators, including extended utilization, analysis of user-rejected suggestions, and analysis of user sentiment. The analysis is based on data collected from 10,696 real users, including 3,910 returning users. The code for the Ansible Lightspeed service and the analysis framework is made available for others to use. To our knowledge, our study is the first to involve thousands of users in evaluating code assistants for domain-specific languages. We propose an improved measure of user acceptance rate and are, to our knowledge, the first code completion tool to report N-day user retention figures. With our findings, we provide insights into the effectiveness of small, dedicated models in a domain-specific context. We hope this work serves as a reference for software engineering and machine learning researchers exploring code completion services for domain-specific languages in particular and programming languages in general.
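
As an aside on the metrics, N-day retention is typically computed from activity logs roughly as follows; the log schema and the "active again exactly on day N" definition below are assumptions for illustration, not necessarily the paper's exact definition.

```python
# Sketch of an N-day retention computation from event logs (hypothetical
# schema: one (user_id, date) row per active day).
from datetime import date, timedelta

def n_day_retention(events, n):
    """Share of users active again exactly on day N after first use."""
    first, active = {}, set(events)
    for user, day in sorted(events, key=lambda e: e[1]):
        first.setdefault(user, day)
    cohort = list(first.items())
    kept = sum((u, d + timedelta(days=n)) in active for u, d in cohort)
    return kept / len(cohort)

log = [("a", date(2024, 1, 1)), ("a", date(2024, 1, 8)),
       ("b", date(2024, 1, 1)), ("c", date(2024, 1, 2))]
print(n_day_retention(log, 7))   # 1/3 of users returned on day 7
```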

The MiG-V was designed for high-security applications and is the first commercially available logic-locked RISC-V processor on the market. In this context, logic locking was used to protect the RISC-V processor design during the untrusted manufacturing process, inserting key-driven logic gates that obfuscate the original design. Although this method defends against malicious modifications such as hardware Trojans, logic locking's impact on the RISC-V processor's data confidentiality during runtime has not been thoroughly examined. In this study, we evaluate the impact of logic locking on data confidentiality. By altering the logic-locking key of the MiG-V while running SSL cryptographic algorithms, we identify data leakages resulting from the exploitation of the logic-locking hardware. We show that changing a single bit of the logic-locking key can expose 100% of the cryptographic encryption key. This research reveals a critical security flaw in logic locking, highlighting the need for comprehensive security assessments that go beyond logic-locking key-recovery attacks.
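
The leakage mechanism can be pictured with a deliberately simplified model of XOR-based key gates (a didactic toy, not the MiG-V netlist): with the correct key the gates are transparent, while a single flipped key bit XORs a known constant into the datapath, so output differences become observable and data-dependent.

```python
# Toy XOR key-gate model: a wrong key bit deterministically perturbs the
# datapath, so an observer of outputs learns about internal values.
KEY = 0b1011  # correct activation key (toy: 4 key gates on a 4-bit wire)

def locked_xor_stage(data: int, key: int) -> int:
    """Each key gate XORs one key bit into one data wire. With
    key == KEY the two XORs cancel and the stage is transparent."""
    return data ^ key ^ KEY

secret = 0b1100
print(bin(locked_xor_stage(secret, KEY)))         # 0b1100: correct key
wrong = KEY ^ 0b0001                              # flip a single key bit
leak = locked_xor_stage(secret, wrong) ^ secret   # attacker-visible delta
print(bin(leak))                                  # 0b1: the flip propagates
```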

Sparse Neural Networks (SNNs) have emerged as powerful tools for efficient feature selection. Leveraging dynamic sparse training (DST) algorithms within SNNs has demonstrated promising feature selection capabilities while drastically reducing computational overhead. Despite these advancements, several critical aspects remain insufficiently explored for feature selection: the choice of DST algorithm for network training, the choice of metric for ranking features/neurons, and how these methods perform across diverse datasets relative to dense networks. This paper addresses these gaps by presenting a comprehensive systematic analysis of feature selection with sparse neural networks. Moreover, we introduce a novel metric, designed around the characteristics of sparse neural networks, that quantifies feature importance within the context of SNNs. Our findings show that feature selection with SNNs trained with DST algorithms can achieve, on average, more than 50% memory and 55% FLOPs reduction compared to dense networks, while outperforming them in terms of the quality of the selected features. Our code and the supplementary material are available on GitHub (https://github.com/zahraatashgahi/Neuron-Attribution).
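
One simple metric in this family, shown below as a sketch, is the classic "neuron strength": score each input feature by the summed magnitude of its surviving connections in the sparse input layer. This is a standard baseline, not necessarily the exact metric the paper proposes.

```python
# Connectivity-based feature ranking for a sparse input layer.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((100, 64))        # input (100 features) -> hidden
mask = rng.random(W.shape) < 0.1          # 90% sparse, as after DST
W_sparse = W * mask

strength = np.abs(W_sparse).sum(axis=1)   # one score per input feature
top_k = np.argsort(strength)[::-1][:10]   # select the 10 strongest features
print(top_k)
```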

Routine computed tomography (CT) scans often detect a wide range of renal cysts, some of which may be malignant. Early and precise localization of these cysts can significantly aid quantitative image analysis. Current segmentation methods, however, do not offer sufficient interpretability at the feature and pixel levels, underscoring the need for an explainable framework that can detect and rectify model inaccuracies. We developed an interpretable segmentation framework and validated it on a multi-centric dataset. A Variational Autoencoder Generative Adversarial Network (VAE-GAN) was employed to learn the latent representation of 3D input patches and reconstruct the input images. Modifying the latent representation along the gradient of the segmentation model generated counterfactual explanations for varying Dice similarity coefficients (DSC). Radiomics features extracted from these counterfactual images, using a ground-truth cyst mask, were analyzed to determine their correlation with segmentation performance. The DSCs for the original and the VAE-GAN-reconstructed images used for counterfactual generation showed no significant differences. Counterfactual explanations highlighted how variations in cyst image features influence segmentation outcomes and exposed model discrepancies. Radiomics features correlating positively and negatively with DSC were identified. The uncertainty of the predicted segmentation masks was estimated using posterior sampling of the weight space. The combination of counterfactual explanations and uncertainty maps provided a deeper understanding of the image features within the segmented renal cysts that lead to high uncertainty. The proposed segmentation framework not only achieved high segmentation accuracy but also increased interpretability regarding how image features impact segmentation performance.
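
The latent-space counterfactual step can be sketched as a gradient walk; in the minimal stand-in below, the encoder, decoder, and segmenter are untrained stubs in place of the paper's VAE-GAN and segmentation network, and a 2-D patch stands in for the 3D input.

```python
# Gradient-guided counterfactuals in latent space: nudge z along the
# Dice-loss gradient and decode a counterfactual image.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, 32))      # patch -> z
dec = nn.Sequential(nn.Linear(32, 16 * 16), nn.Unflatten(1, (1, 16, 16)))
seg = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.Sigmoid())

def dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

x = torch.rand(1, 1, 16, 16)                   # 2-D stand-in for a 3D patch
gt = (torch.rand(1, 1, 16, 16) > 0.5).float()  # ground-truth cyst mask

z = enc(x).detach().requires_grad_(True)
for _ in range(5):
    # Descend the Dice loss to raise DSC; flip the sign of the update to
    # generate counterfactuals at progressively lower DSC instead.
    loss = dice_loss(seg(dec(z)), gt)
    loss.backward()
    with torch.no_grad():
        z -= 0.5 * z.grad
    z.grad = None

counterfactual = dec(z).detach()   # image whose radiomics are then compared
print(counterfactual.shape)        # torch.Size([1, 1, 16, 16])
```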

Federated Learning (FL) allows devices to train a global machine learning model without sharing data. In the context of wireless networks, the inherently unreliable transmission channel introduces delays and errors that compromise the regularity of global-model updates. Furthermore, the limited resources and energy consumption of devices affect FL performance. This work therefore proposes a new FL algorithm, FL-E2WS, that considers both the requirements of federated training and of a wireless network within the scope of the Internet of Things. To reduce the energy cost of devices, FL-E2WS schedules communication resources, allocating bandwidth and power for model transmission under a given device selection and uplink resource-block allocation while meeting requirements on delay, power consumption, and packet error rate. The simulation results demonstrate that FL-E2WS reduces energy consumption by up to 70.12% and enhances the accuracy of the global model by up to 10.21% compared to FL algorithms that lack transmission-channel knowledge. Additionally, when compared to FL versions that scale communication resources, FL-E2WS achieves up to a 38.61% reduction in energy consumption and improves the accuracy of the global model by up to 1.61%.
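
A toy version of the scheduling step might look as follows; the rate, delay, and packet-error models and all constants are simplified stand-ins, not the FL-E2WS formulation.

```python
# Greedy sketch of energy-aware device selection and resource-block (RB)
# allocation: each RB goes to the feasible device of least transmit energy.
import math

MODEL_BITS = 5e4
DELAY_MAX, PER_MAX = 0.5, 0.1           # delay bound (s), packet-error bound

devices = [  # (tx power W, channel gain) per candidate device (toy values)
    (0.10, 1e-7), (0.20, 5e-8), (0.15, 2e-7), (0.25, 8e-8),
]
rbs = [(180e3, 1e-10)] * 3              # (bandwidth Hz, noise W) per RB

def cost(p, g, bw, n0):
    """Energy to upload the model on this RB, or None if infeasible."""
    rate = bw * math.log2(1 + p * g / n0)
    delay = MODEL_BITS / rate
    per = math.exp(-p * g / n0)          # toy packet-error-rate model
    ok = delay <= DELAY_MAX and per <= PER_MAX
    return (p * delay, delay, per) if ok else None

chosen, free = [], list(range(len(devices)))
for bw, n0 in rbs:
    cands = [(cost(*devices[i], bw, n0), i) for i in free]
    cands = [(c, i) for c, i in cands if c is not None]
    if not cands:
        continue
    (energy, delay, per), i = min(cands)  # least energy first
    chosen.append((i, round(energy, 4)))
    free.remove(i)
print(chosen)
```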
