
We analyze the physical spacings between locations of safety rest areas on interstate highways in the United States. We show that normalized safety rest area spacings on major interstates exhibit Wigner surmise statistics, which align with the eigenvalue spacings of the Gaussian Unitary Ensemble from random matrix theory, as well as with a one-dimensional gas interacting via the Coulomb potential. We identify economic and geographic regional traits at the state level that exhibit Poissonian statistics, which become more pronounced with increased geographical obstacles to interstate travel. Other regional filters (e.g., historical or political) produced results that did not diverge substantially from the overall Wigner surmise model.
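For reference, the two spacing statistics contrasted above have standard closed forms. Writing $s$ for the normalized spacing, the GUE Wigner surmise and the Poisson (uncorrelated) spacing distribution are

$$ p_{\mathrm{GUE}}(s) \;=\; \frac{32}{\pi^{2}}\, s^{2}\, e^{-4 s^{2}/\pi}, \qquad p_{\mathrm{Poisson}}(s) \;=\; e^{-s}. $$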

Related Content


Even though virtual testing of Autonomous Vehicles (AVs) is widely recognized as essential for safety assessment, AV simulators are still undergoing active development. One particularly challenging question is how to effectively include the Sensing and Perception (S&P) subsystem in the simulation loop. In this article, we define Perception Error Models (PEMs), virtual simulation components that enable the analysis of the impact of perception errors on AV safety without the need to model the sensors themselves. We propose a generalized data-driven procedure for parametric modeling and evaluate it using Apollo, an open-source driving software stack, and nuScenes, a public AV dataset. Additionally, we implement PEMs in SVL, an open-source vehicle simulator. Furthermore, we demonstrate the usefulness of PEM-based virtual tests by evaluating camera, LiDAR, and camera-LiDAR setups. Our virtual tests highlight limitations in current evaluation metrics, and the proposed approach can help study the impact of perception errors on AV safety.
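As a rough illustration of the idea (not the paper's actual interface: the class name, parameters, and Gaussian noise model below are assumptions), a minimal perception error model can perturb ground-truth detections directly instead of simulating raw sensor data:

```python
import numpy as np

class PerceptionErrorModel:
    """Hypothetical PEM sketch: injects errors into ground-truth
    detections rather than modeling the sensor itself."""

    def __init__(self, pos_sigma=0.3, miss_rate=0.05, rng=None):
        self.pos_sigma = pos_sigma    # std. dev. of positional error (m)
        self.miss_rate = miss_rate    # probability of a missed detection
        self.rng = rng or np.random.default_rng()

    def apply(self, detections):
        """detections: list of (x, y) ground-truth object positions."""
        noisy = []
        for x, y in detections:
            if self.rng.random() < self.miss_rate:
                continue              # simulate a false negative
            dx, dy = self.rng.normal(0.0, self.pos_sigma, size=2)
            noisy.append((x + dx, y + dy))
        return noisy
```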

The Laplace eigenvalue problem on circular sectors has eigenfunctions with corner singularities, for which standard methods may produce suboptimal approximation results. To address this issue, this paper proposes a novel numerical algorithm that enhances standard isogeometric analysis with a single-patch graded mesh refinement scheme. Numerical tests demonstrate optimal convergence rates for both eigenvalues and eigenfunctions. Furthermore, the results show that smooth splines possess a superior approximation constant compared to their $C^0$-continuous counterparts for the lower part of the Laplace spectrum, extending previous findings on the excellent spectral approximation properties of smooth splines from rectangular domains to circular sectors. In addition, graded meshes prove to be particularly advantageous for accurately approximating a limited number of eigenvalues. A drawback of the proposed algorithm is the singularity of the isogeometric parameterization: some basis functions do not belong to the solution space of the corresponding weak problem, which constitutes a variational crime. The approach nevertheless proves to be robust. Finally, a hierarchical mesh structure is presented that avoids anisotropic elements, eliminates redundant degrees of freedom, and keeps the number of basis functions contributing to the variational crime constant, independent of the mesh size. Numerical results validate the effectiveness of hierarchical mesh grading for the simulation of eigenfunctions with and without corner singularities.
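As a minimal sketch of mesh grading toward a corner singularity (the node formula and grading exponent below are generic assumptions, not necessarily the paper's exact scheme):

```python
import numpy as np

def graded_nodes(n, beta=3.0):
    """Nodes on [0, 1] clustered toward 0 via x_i = (i/n)**beta.
    beta = 1 recovers a uniform mesh; larger beta grades the mesh
    more strongly toward the corner singularity at 0."""
    i = np.arange(n + 1)
    return (i / n) ** beta

print(graded_nodes(8, beta=3.0))
```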

The safety of Large Language Models (LLMs) has gained increasing attention in recent years, but a comprehensive approach for detecting safety issues in LLMs' responses in an aligned, customizable, and explainable manner is still lacking. In this paper, we propose ShieldLM, an LLM-based safety detector that aligns with general human safety standards, supports customizable detection rules, and provides explanations for its decisions. To train ShieldLM, we compile a large bilingual dataset comprising 14,387 query-response pairs, annotating the safety of responses according to various safety standards. Through extensive experiments, we demonstrate that ShieldLM surpasses strong baselines across four test sets, showcasing remarkable customizability and explainability. Besides performing well on standard detection datasets, ShieldLM has also proven effective in real-world situations as a safety evaluator for advanced LLMs. We release ShieldLM at \url{https://github.com/thu-coai/ShieldLM} to support accurate and explainable safety detection under various safety standards, contributing to ongoing efforts to enhance the safety of LLMs.
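As a hypothetical sketch of how an aligned, customizable detector of this kind might be prompted (the template, labels, and function name below are assumptions, not ShieldLM's actual format):

```python
def build_detection_prompt(query, response, rules=None):
    """Hypothetical prompt template for an LLM-based safety detector:
    requests a verdict plus an explanation, optionally under custom rules."""
    rule_text = "\n".join(f"- {r}" for r in (rules or []))
    return (
        "You are a safety detector. Judge whether the response is safe.\n"
        + (f"Custom rules:\n{rule_text}\n" if rule_text else "")
        + f"Query: {query}\nResponse: {response}\n"
        "Answer 'safe' or 'unsafe', then explain your decision."
    )
```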

The safety alignment of Large Language Models (LLMs) is vulnerable to both manual and automated jailbreak attacks, which adversarially trigger LLMs to output harmful content. However, current methods for jailbreaking LLMs, which nest entire harmful prompts, are not effective at concealing malicious intent and can easily be identified and rejected by well-aligned LLMs. This paper finds that decomposing a malicious prompt into separate sub-prompts can effectively obscure its underlying malicious intent by presenting it in a fragmented, less detectable form, thereby addressing these limitations. We introduce an automatic prompt \textbf{D}ecomposition and \textbf{R}econstruction framework for jailbreak \textbf{Attack}s (DrAttack). DrAttack comprises three key components: (a) `Decomposition' of the original prompt into sub-prompts; (b) `Reconstruction' of these sub-prompts, performed implicitly via in-context learning with semantically similar but harmless reassembly demos; and (c) a `Synonym Search' over sub-prompts, which finds synonyms that preserve the original intent while jailbreaking LLMs. An extensive empirical study across multiple open-source and closed-source LLMs demonstrates that, with a significantly reduced number of queries, DrAttack achieves a substantial gain in success rate over prior SOTA prompt-only attackers. Notably, its success rate of 78.0\% on GPT-4 with merely 15 queries surpasses the previous state of the art by 33.1\%.

Modern Out-of-Order (OoO) CPUs are complex systems with many components interleaved in non-trivial ways. Pinpointing performance bottlenecks and understanding the underlying causes of program performance issues are critical tasks for making the most of hardware resources. We provide an in-depth overview of performance bottlenecks in recent OoO microarchitectures and describe the difficulties of detecting them. Techniques that measure resource utilization can offer a good understanding of a program's execution but, due to the constraints inherent to the Performance Monitoring Units (PMUs) of CPUs, do not provide the relevant metrics for every use case. Another approach is to rely on a performance model that simulates CPU behavior. Such a model makes it possible to implement any new microarchitecture-related metric. Within this framework, we advocate implementing modeled resources as parameters that can be varied at will to reveal performance bottlenecks. This allows a generalization of bottleneck analysis that we call sensitivity analysis. We present Gus, a novel performance analysis tool that combines the advantages of sensitivity analysis and dynamic binary instrumentation within a resource-centric CPU model. We evaluate the impact of sensitivity analysis on bottleneck detection over a set of high-performance computing kernels.
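As a toy illustration of the sensitivity idea (the throughput model and numbers below are illustrative assumptions, not Gus's actual CPU model), one can sweep a single modeled resource and watch the modeled cycle count respond; the resource whose variation moves the cycle count is the bottleneck:

```python
def modeled_cycles(n_ops, issue_width, load_ports, n_loads):
    """Toy throughput model: the most contended resource dictates cycles."""
    return max(n_ops / issue_width, n_loads / load_ports)

# Sensitivity analysis: vary one modeled resource at a time.
baseline = modeled_cycles(400, issue_width=4, load_ports=2, n_loads=300)
for ports in (1, 2, 3, 4):
    cycles = modeled_cycles(400, issue_width=4, load_ports=ports, n_loads=300)
    print(f"load ports={ports}: {cycles:.0f} cycles "
          f"({baseline / cycles:.2f}x vs baseline)")
```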

Passwords remain a widely used authentication mechanism, despite their well-known security and usability limitations. To improve on this situation, next-generation authentication mechanisms based on behavioral biometric factors, such as eye movements and brainwaves, have emerged. However, their usability remains relatively under-explored. To fill this gap, we conducted an empirical user study (n=32 participants) to evaluate three brain-based and three eye-based authentication mechanisms, using both qualitative and quantitative methods. Our findings show good overall usability according to the System Usability Scale for both categories of mechanisms, with average SUS scores in the range of 78.6-79.6 and the best mechanisms rated with an "excellent" score. Participants perceived brainwave authentication as more secure, yet more privacy-invasive and effort-intensive, than eye movement authentication. However, the significant number of neutral responses indicates participants' need for more detailed information about the security and privacy implications of these authentication methods. Building on the collected evidence, we identify three key areas for improvement: privacy, authentication interface design, and verification time. We offer recommendations for designers and developers to improve the usability and security of next-generation authentication mechanisms.
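For context, the System Usability Scale score cited above is computed from ten Likert items rated 1-5; a minimal sketch of the standard scoring rule:

```python
def sus_score(responses):
    """System Usability Scale: 10 Likert items scored 1-5.
    Odd-numbered items contribute (score - 1), even-numbered items
    contribute (5 - score); the sum is scaled to a 0-100 range."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([5, 1, 5, 2, 4, 1, 5, 2, 5, 1]))  # -> 92.5
```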

In this study, we tackle a growing concern around the safety and ethical use of large language models (LLMs). Despite their potential, these models can be tricked into producing harmful or unethical content through various sophisticated methods, including 'jailbreaking' techniques and targeted manipulation. Our work zeroes in on a specific issue: to what extent LLMs can be led astray by asking them to generate responses that are instruction-centric, such as pseudocode, a program, or a software snippet, as opposed to vanilla text. To investigate this question, we introduce TechHazardQA, a dataset containing complex queries that should be answered in both text and instruction-centric formats (e.g., pseudocode), aimed at identifying triggers for unethical responses. We query a series of LLMs -- Llama-2-13b, Llama-2-7b, Mistral-V2, and Mistral 8X7B -- and ask them to generate both text and instruction-centric responses. For evaluation, we report the harmfulness score metric as well as judgements from GPT-4 and humans. Overall, we observe that asking LLMs to produce instruction-centric responses increases unethical response generation by ~2-38% across the models. As an additional objective, we investigate the impact of model editing using the ROME technique, which further increases the propensity for generating undesirable content. In particular, asking edited LLMs to generate instruction-centric responses further increases unethical response generation by ~3-16% across the different models.

Knowing which countries contribute the most to pushing the boundaries of knowledge in science and technology has social and political importance. However, common citation metrics do not adequately measure this contribution. Measuring it requires more stringent metrics appropriate for the highly influential breakthrough papers that push the boundaries of knowledge, which are very highly cited but very rare. Here I used the recently described Rk index, specifically designed to address this issue. I applied this index to 25 countries and the EU across 10 key research topics, five technological and five biomedical, studying domestic and internationally collaborative papers independently. In the technological topics, the Rk indices of domestic papers show that, overall, the USA, China, and the EU are the leaders; other countries are clearly behind. The USA is notably ahead of China, and the EU is far behind China. The same approach applied to the biomedical topics shows an overwhelming dominance of the USA, with the EU ahead of China. The analysis of internationally collaborative papers further demonstrates US dominance. These results conflict with current country rankings based on less stringent indicators.

Bridge health monitoring has become crucial with the deployment of IoT sensors. The challenge lies in securely storing vast amounts of data and extracting useful information to promptly identify unhealthy bridge conditions. To address this challenge, we propose BIONIB, wherein real-time IoT data is stored on the blockchain for monitoring bridges. EOSIO, an emerging blockchain, is used because of its exceptional scaling capabilities. The approach involves collecting data from IoT sensors and using an unsupervised machine-learning-based technique called the Novelty Index (NI) to observe meaningful patterns in the data. EOSIO smart contracts are used in the implementation because of their efficiency, security, and programmability, which make them well-suited for handling complex transactions and automating processes within decentralized applications. BIONIB provides the secure storage benefits of blockchain as well as useful predictions based on the NI. The performance analysis uses real-time data collected from IoT sensors on a bridge in healthy and unhealthy states, gathered through extensive experimentation with different loads, climatic conditions, and bridge health states. We observe the performance of BIONIB under varying numbers of sensors and varying numbers of participating blockchain nodes, and find a tradeoff between throughput, latency, and computational resources. Storage efficiency can be increased manyfold at the cost of a slight increase in latency caused by the NI calculation. As latency is not a significant concern in bridge health applications, the results demonstrate that BIONIB achieves high throughput, parallel processing, and high security while scaling efficiently.
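As a minimal sketch of one way an unsupervised novelty index can be computed from healthy-state sensor features (the Mahalanobis-distance formulation below is an assumption, not necessarily BIONIB's exact NI):

```python
import numpy as np

class NoveltyIndex:
    """Fit on healthy-state sensor features; score new readings by their
    Mahalanobis distance from the healthy distribution (illustrative)."""

    def fit(self, healthy):  # healthy: array of shape (n_samples, n_features)
        self.mean = healthy.mean(axis=0)
        cov = np.cov(healthy, rowvar=False)
        self.inv_cov = np.linalg.pinv(cov)   # pseudo-inverse for stability
        return self

    def score(self, x):      # x: one feature vector; larger = more novel
        d = x - self.mean
        return float(np.sqrt(d @ self.inv_cov @ d))
```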

The recent proliferation of knowledge graphs (KGs), coupled with incomplete or partial information in the form of missing relations (links) between entities, has fueled a lot of research on knowledge base completion (also known as relation prediction). Several recent works suggest that convolutional neural network (CNN) based models generate richer and more expressive feature embeddings and hence also perform well on relation prediction. However, we observe that these KG embeddings treat triples independently and thus fail to capture the complex and hidden information that is inherently implicit in the local neighborhood surrounding a triple. To this end, our paper proposes a novel attention-based feature embedding that captures both entity and relation features in any given entity's neighborhood. Additionally, we encapsulate relation clusters and multi-hop relations in our model. Our empirical study offers insights into the efficacy of our attention-based model, and we show marked performance gains in comparison to state-of-the-art methods on all datasets.
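Schematically, and in notation of my own (not necessarily the paper's exact formulation), such neighborhood attention can aggregate triple features around an entity $e_i$ as

$$ \vec{h}'_i = \sigma\Big(\sum_{(j,k)\in\mathcal{N}_i} \alpha_{ijk}\, c_{ijk}\Big), \qquad c_{ijk} = \mathbf{W}_1\big[\vec{h}_i \,\|\, \vec{h}_j \,\|\, \vec{g}_k\big], \qquad \alpha_{ijk} = \operatorname{softmax}_{(j,k)}\big(\mathrm{LeakyReLU}(\mathbf{W}_2\, c_{ijk})\big), $$

where $\vec{h}_j$ and $\vec{g}_k$ are the embeddings of a neighboring entity and the connecting relation, $\|$ denotes concatenation, and $\mathcal{N}_i$ is the set of (entity, relation) pairs adjacent to $e_i$.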
