
We design a concept for an autonomous underground freight transport system for Hanover, Germany. To evaluate the resulting changes in overall traffic flows from an environmental perspective, we carried out an agent-based traffic simulation with MATSim. Our simulations indicate comparatively small impacts on network-wide traffic volumes. Local CO2 emissions, on the other hand, could be reduced by up to 32%. In total, the shuttle system can replace more than 18% of the vehicles currently operating with conventional combustion engines. An autonomous underground freight transport system can thus contribute to environmentally friendly and economical transport of urban goods, provided the system is used cooperatively.
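The environmental evaluation amounts to aggregating emissions over simulated link traffic. The sketch below (Python, with hypothetical link volumes and an assumed emission factor; MATSim itself is Java-based) illustrates only that post-processing step, not the study's actual pipeline.

```python
# Minimal sketch: aggregating local CO2 from simulated link traffic, in the
# spirit of post-processing agent-based simulation output. The link volumes
# and the emission factor below are assumptions for illustration only.

link_volumes_vehkm = {"link_1": 1200.0, "link_2": 800.0}  # vehicle-km per link (assumed)
emission_factor_g_per_vehkm = 180.0                       # assumed CO2 factor for freight vans

def total_co2_kg(volumes, factor):
    """Sum CO2 over all links: vehicle-km times grams per vehicle-km."""
    return sum(v * factor for v in volumes.values()) / 1000.0

baseline = total_co2_kg(link_volumes_vehkm, emission_factor_g_per_vehkm)
# A scenario run with the shuttle system yields new link volumes; the
# relative reduction is then (baseline - scenario) / baseline.
```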

Related Content

This paper develops structure-preserving, oscillation-eliminating discontinuous Galerkin (OEDG) schemes for ideal magnetohydrodynamics (MHD), as a sequel to our recent work [Peng, Sun, and Wu, OEDG: Oscillation-eliminating discontinuous Galerkin method for hyperbolic conservation laws, 2023]. The schemes are based on a locally divergence-free (LDF) oscillation-eliminating (OE) procedure that suppresses spurious oscillations while retaining many of the good properties of the original DG schemes, such as conservation, local compactness, and optimal convergence rates. The OE procedure is built on the solution operator of a novel damping equation, a simple linear ordinary differential equation (ODE) whose exact solution can be written in closed form. Because this OE procedure does not interfere with the DG spatial discretization or the Runge-Kutta stage updates, it can easily be incorporated into existing DG codes as an independent module. These features make the proposed LDF OEDG schemes highly efficient and easy to implement. In addition, we present a positivity-preserving (PP) analysis of the LDF OEDG schemes on Cartesian meshes via the optimal convex decomposition technique and the geometric quasi-linearization (GQL) approach. Efficient PP LDF OEDG schemes are obtained with the HLL flux under a condition enforceable by the simple local scaling PP limiter. Several one- and two-dimensional MHD tests confirm the accuracy, effectiveness, and robustness of the proposed structure-preserving OEDG schemes.
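To see why the solution operator is available in closed form, consider a linear damping ODE acting on the higher-order modal coefficients of the DG polynomial. The schematic form below is consistent with the description above; the exact damping coefficients $\delta_m$ are defined in the cited OEDG paper.

```latex
\frac{\mathrm{d}\hat{u}_m}{\mathrm{d}\tau} = -\delta_m\,\hat{u}_m,
\qquad m \ge 1,
\qquad\text{so that}\qquad
\hat{u}_m(\tau) = e^{-\delta_m \tau}\,\hat{u}_m(0).
```

Applying the operator thus reduces to one exponential scaling per mode, and the cell average ($m = 0$) is left untouched, which is consistent with the conservation property noted above.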

Performance prediction has been a key part of the neural architecture search (NAS) process, speeding up NAS algorithms by avoiding resource-consuming network training. Although many performance predictors correlate well with ground-truth performance, they require training data in the form of trained networks. Recently, zero-cost proxies have been proposed as an efficient way to estimate network performance without any training. However, they are still poorly understood, exhibit biases related to network properties, and their accuracy is limited. Motivated by these drawbacks of zero-cost proxies, we propose neural graph features (GRAF), simple-to-compute properties of architectural graphs. GRAF offers fast and interpretable performance prediction while outperforming zero-cost proxies and other common encodings. In combination with other zero-cost proxies, GRAF outperforms most existing performance predictors at a fraction of the cost.
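A minimal sketch of such features, in the spirit of GRAF: cheap, interpretable statistics of an architecture's operation graph. The feature set below is hypothetical, not the exact GRAF definition.

```python
# Simple-to-compute properties of a cell's directed acyclic graph.
import networkx as nx

def graph_features(g: nx.DiGraph, op_labels: dict) -> dict:
    """Compute cheap, interpretable statistics of an architecture graph."""
    feats = {
        "num_nodes": g.number_of_nodes(),
        "num_edges": g.number_of_edges(),
        "max_path_len": nx.dag_longest_path_length(g),
        "avg_out_degree": sum(d for _, d in g.out_degree()) / g.number_of_nodes(),
    }
    # Count how often each operation type (e.g. conv, skip) appears.
    for op in set(op_labels.values()):
        feats[f"count_{op}"] = sum(1 for v in op_labels.values() if v == op)
    return feats
```

Such a feature vector can be fed to any off-the-shelf regressor (random forest, gradient boosting) as the performance predictor.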

The objective of this project is to use the CMOD A7 35t FPGA board to generate pseudo-random numbers suitable for encryption. We aim to achieve this by leveraging the inherent randomness present in environmental data captured by sensors. This data will be used as a seed to initialize an algorithm implemented on the CMOD A7 35t FPGA board. The project will focus on interfacing the sensors with the FPGA and developing suitable algorithms to ensure the generated numbers exhibit strong randomness properties.
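A software model of the intended flow might look as follows: read environmental sensor noise, hash it into a seed, and drive a simple PRNG. On the CMOD A7 35t the generator would be implemented in HDL; this Python sketch (with a made-up read_sensor() stub) only illustrates the seeding idea.

```python
import hashlib

def read_sensor() -> bytes:
    """Stub for raw sensor samples (temperature, light, etc.)."""
    return b"\x13\x37\xfa\x02"  # placeholder bytes, not real data

def seed_from_sensor() -> int:
    # Hash the raw samples so biased sensor readings still yield a
    # well-mixed 32-bit seed.
    digest = hashlib.sha256(read_sensor()).digest()
    return int.from_bytes(digest[:4], "big") or 1  # xorshift seed must be nonzero

def xorshift32(state: int):
    """Classic 32-bit xorshift PRNG; its shift/XOR steps map directly to FPGA logic."""
    while True:
        state ^= (state << 13) & 0xFFFFFFFF
        state ^= state >> 17
        state ^= (state << 5) & 0xFFFFFFFF
        yield state

rng = xorshift32(seed_from_sensor())
print([next(rng) for _ in range(3)])
```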

Large Language Models have demonstrated remarkable performance across various tasks, exhibiting the capacity to acquire new skills swiftly, for example through In-Context Learning (ICL) from a handful of demonstration examples. In this work, we present a comprehensive framework for investigating Multimodal ICL (M-ICL) in the context of Large Multimodal Models. We consider the best open-source multimodal models (e.g., IDEFICS, OpenFlamingo) and a wide range of multimodal tasks. Our study unveils several noteworthy findings: (1) M-ICL primarily relies on text-driven mechanisms, showing little to no influence from the image modality. (2) When used with an advanced ICL strategy such as RICES, M-ICL performs no better than a simple strategy based on majority voting over context examples. Moreover, we identify several biases and limitations of M-ICL that warrant consideration prior to deployment. Code is available at //gitlab.com/folbaeni/multimodal-icl
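The majority-voting baseline referenced in finding (2) is easy to state in code: instead of letting the model attend to the retrieved demonstrations, simply take the most frequent label among them. The sketch below is illustrative; the names and the similarity placeholder are assumptions, not from the released code.

```python
from collections import Counter

def majority_vote(context_labels: list[str]) -> str:
    """Predict the label that occurs most often among in-context examples."""
    return Counter(context_labels).most_common(1)[0][0]

def rices_select(pool: list[dict], sim, query, k: int = 8) -> list[dict]:
    """RICES-style selection: rank the demonstration pool by similarity
    to the query (typically image similarity) and keep the top-k."""
    return sorted(pool, key=lambda ex: sim(ex, query), reverse=True)[:k]

labels = ["cat", "dog", "cat"]   # labels of the selected demonstrations
print(majority_vote(labels))     # -> "cat"
```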

The advent of Large Language Models (LLMs) has made a transformative impact. However, the potential for LLMs such as ChatGPT to be exploited to generate misinformation has raised serious concerns about online safety and public trust. A fundamental research question is: will LLM-generated misinformation cause more harm than human-written misinformation? We propose to tackle this question from the perspective of detection difficulty. We first build a taxonomy of LLM-generated misinformation. We then categorize and validate the potential real-world methods for generating misinformation with LLMs. Through extensive empirical investigation, we discover that LLM-generated misinformation can be harder for both humans and detectors to identify than human-written misinformation with the same semantics, which suggests that it can have more deceptive styles and potentially cause more harm. We also discuss the implications of our findings for combating misinformation in the age of LLMs, along with possible countermeasures.

The development of digital humanities has opened new perspectives in the history of Islam: whether dealing with sparse or, at times, vast source corpora (such as the Sīra, al-Ṭabarī, al-Dhahabī, etc.), these tools allow us to approach texts far more effectively from a statistical perspective, in order to support more general hypotheses and move beyond case studies. Drawing on the work of a number of researchers as well as our own, we propose to study the potentialities and limitations of computational methods for the history of medieval Islam, keeping in mind that the conclusions may prove useful to researchers of other periods. We particularly emphasize two tools: the construction and use of relational databases, and, more recently, the tagging of sources, with the stated goal of addressing some of the problems posed by the former method.
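As a minimal sketch of the relational-database approach for this kind of prosopographical data, the snippet below stores persons and source mentions in normalized tables and answers a statistical question with one query. The schema and rows are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE person  (id INTEGER PRIMARY KEY, name TEXT, death_year_ah INTEGER);
    CREATE TABLE mention (id INTEGER PRIMARY KEY,
                          person_id INTEGER REFERENCES person(id),
                          source TEXT, tag TEXT);
""")
con.executemany("INSERT INTO person VALUES (?, ?, ?)",
                [(1, "Ibn Sa'd", 230), (2, "al-Dhahabi", 748)])
con.executemany("INSERT INTO mention VALUES (?, ?, ?, ?)",
                [(1, 1, "Sira", "transmitter"), (2, 2, "chronicle", "author")])

# Statistical questions become one-line queries, e.g. mentions per tag:
for tag, n in con.execute("SELECT tag, COUNT(*) FROM mention GROUP BY tag"):
    print(tag, n)
```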

Non-convex optimization is ubiquitous in modern machine learning. Researchers devise non-convex objective functions and optimize them using off-the-shelf optimizers such as stochastic gradient descent and its variants, which leverage local geometry and update iteratively. Even though optimizing non-convex functions is NP-hard in the worst case, optimization quality in practice is often not an issue: optimizers are widely believed to find approximate global minima. Researchers hypothesize a unified explanation for this intriguing phenomenon: most of the local minima of practically used objectives are approximately global minima. We rigorously formalize this hypothesis for concrete instances of machine learning problems.
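One generic way to make the hypothesis precise (a standard formalization, not necessarily the paper's exact definition) is to ask that every local minimum be an $\epsilon$-approximate global minimum:

```latex
f(x) \;\le\; \min_{y} f(y) + \epsilon
\qquad \text{for every local minimum } x \text{ of } f .
```

Under this property, any optimizer that reliably reaches a local minimum automatically attains an objective value within $\epsilon$ of the global optimum.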

Compared with the cheap addition operation, multiplication has much higher computational complexity. The convolutions widely used in deep neural networks are exactly cross-correlations that measure the similarity between input features and convolution filters, which involves massive multiplications between floating-point values. In this paper, we present adder networks (AdderNets) that trade these massive multiplications in deep neural networks, especially convolutional neural networks (CNNs), for much cheaper additions to reduce computation costs. In AdderNets, we take the $\ell_1$-norm distance between the filters and the input features as the output response. The influence of this new similarity measure on the optimization of neural networks is thoroughly analyzed. To achieve better performance, we develop a special back-propagation approach for AdderNets by investigating the full-precision gradient. We then propose an adaptive learning-rate strategy that enhances the training of AdderNets according to the magnitude of each neuron's gradient. As a result, the proposed AdderNets achieve 74.9% Top-1 accuracy and 91.7% Top-5 accuracy with ResNet-50 on the ImageNet dataset without any multiplications in the convolution layers.
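The core idea is easy to sketch: the layer's response is the negative $\ell_1$ distance between the filter and each input patch, so only additions, subtractions, and absolute values are needed. Shapes and naming below are illustrative (single channel, stride 1, no padding); see the paper for the exact formulation.

```python
import numpy as np

def adder_layer_2d(x: np.ndarray, f: np.ndarray) -> np.ndarray:
    """x: (H, W) single-channel input, f: (k, k) filter."""
    k = f.shape[0]
    H, W = x.shape
    out = np.empty((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + k, j:j + k]
            out[i, j] = -np.abs(patch - f).sum()  # negative ell_1 distance as the response
    return out

y = adder_layer_2d(np.random.randn(8, 8), np.random.randn(3, 3))
```

Because larger responses correspond to smaller $\ell_1$ distances, the output plays the same "similarity score" role as a cross-correlation, without any multiplications.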

Language model pre-training has proven useful for learning universal language representations. As a state-of-the-art pre-trained language model, BERT (Bidirectional Encoder Representations from Transformers) has achieved remarkable results on many language understanding tasks. In this paper, we conduct exhaustive experiments to investigate different fine-tuning methods for BERT on text classification tasks and provide a general solution for BERT fine-tuning. The proposed solution obtains new state-of-the-art results on eight widely studied text classification datasets.
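For concreteness, a minimal fine-tuning step for BERT on text classification, using the Hugging Face transformers API as a generic illustration (not the authors' code or their specific fine-tuning recipe):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)  # small LR, as is typical for BERT

texts, labels = ["great movie", "dull plot"], torch.tensor([1, 0])
batch = tok(texts, padding=True, truncation=True, return_tensors="pt")

model.train()
loss = model(**batch, labels=labels).loss  # cross-entropy over the [CLS] classification head
loss.backward()
opt.step()
```

Design questions such as which layer to pool, layer-wise learning-rate decay, and further in-domain pre-training are exactly the choices such a study systematically compares.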

Attention is an increasingly popular mechanism used in a wide range of neural architectures. Because of the fast-paced advances in this area, a systematic overview of attention is still missing. In this article, we define a unified model for attention architectures in natural language processing, with a focus on architectures designed to work with vector representations of textual data. We discuss the dimensions along which proposals differ, the possible uses of attention, and chart the major research activities and open challenges in the area.
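The common core that most attention variants share: score each key against a query, normalize the scores, and return the weighted sum of values. The sketch below uses scaled dot-product scoring as one concrete choice; other variants swap in different compatibility functions.

```python
import numpy as np

def attention(q, K, V):
    """q: (d,), K: (n, d), V: (n, d_v) -> (d_v,) context vector."""
    scores = K @ q / np.sqrt(q.shape[0])  # compatibility of q with each key
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over the n positions
    return weights @ V                    # convex combination of the values

ctx = attention(np.random.randn(4), np.random.randn(6, 4), np.random.randn(6, 8))
```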
