
Marine robots, particularly Unmanned Surface Vessels (USVs), have gained considerable attention for their diverse applications in maritime tasks, including search and rescue, environmental monitoring, and maritime security. This paper presents the design and implementation of a USV named marine$\mathcal{X}$. The hardware components of marine$\mathcal{X}$ are meticulously developed to ensure robustness, efficiency, and adaptability to varying environmental conditions. Furthermore, the integration of a vision-based object tracking algorithm empowers marine$\mathcal{X}$ to autonomously track and monitor specific objects on the water surface. The control system utilizes PID control, enabling precise navigation of marine$\mathcal{X}$ while maintaining a desired course and distance to the target object. To assess the performance of marine$\mathcal{X}$, comprehensive testing is conducted, encompassing simulation, trials in a marine pool, and real-world tests in the open sea. The successful outcomes of these tests demonstrate the USV's capability to achieve real-time object tracking, showcasing its potential for various applications in maritime operations.
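As a rough illustration of the control loop described above, the following Python sketch couples two PID controllers to a visually tracked target, one for heading and one for standoff distance. The gains, variable names, and the differential-thrust mapping are assumptions for illustration and are not taken from marine$\mathcal{X}$.

```python
# Minimal sketch (assumed gains and interfaces): PID regulation of heading and
# standoff distance toward a visually tracked target.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def control_step(heading_pid, distance_pid, bearing_to_target,
                 distance_to_target, desired_distance):
    """Map tracking errors to thrust commands for a differential-thrust USV."""
    yaw_cmd = heading_pid.step(bearing_to_target)                          # steer toward target
    surge_cmd = distance_pid.step(distance_to_target - desired_distance)   # keep standoff distance
    return surge_cmd + yaw_cmd, surge_cmd - yaw_cmd                        # left, right thrust
```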

Related Content


We study the edge-coloring problem in simple $n$-vertex $m$-edge graphs with maximum degree $\Delta$. This is one of the most classical and fundamental graph-algorithmic problems. Vizing's celebrated theorem provides $(\Delta+1)$-edge-coloring in $O(m\cdot n)$ deterministic time. This running time was improved to $O\left(m\cdot\min\left\{\Delta\cdot\log n, \sqrt{n}\right\}\right)$. It is also well-known that $3\left\lceil\frac{\Delta}{2}\right\rceil$-edge-coloring can be computed in $O(m\cdot\log\Delta)$ time deterministically. Duan et al. devised a randomized $(1+\varepsilon)\Delta$-edge-coloring algorithm with running time $O\left(m\cdot\frac{\log^6 n}{\varepsilon^2}\right)$. It was however open if there exists a deterministic near-linear time algorithm for this basic problem. We devise a simple deterministic $(1+\varepsilon)\Delta$-edge-coloring algorithm with running time $O\left(m\cdot\frac{\log n}{\varepsilon}\right)$. We also devise a randomized $(1+\varepsilon)\Delta$-edge-coloring algorithm with running time $O(m\cdot(\varepsilon^{-18}+\log(\varepsilon\cdot\Delta)))$. For $\varepsilon\geq\frac{1}{\log^{1/18}\Delta}$, this running time is $O(m\cdot\log\Delta)$.
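For context on the problem setup, the sketch below implements only the textbook greedy baseline, which may use up to $2\Delta-1$ colors; it is not any of the $(1+\varepsilon)\Delta$ or $(\Delta+1)$ algorithms discussed above, just a reference point.

```python
# Greedy edge coloring: give each edge the smallest color unused at either
# endpoint. Uses at most 2*Delta - 1 colors; baseline only.

def greedy_edge_coloring(n, edges):
    used = [set() for _ in range(n)]   # colors already incident to each vertex
    coloring = {}
    for u, v in edges:
        c = 0
        while c in used[u] or c in used[v]:
            c += 1
        coloring[(u, v)] = c
        used[u].add(c)
        used[v].add(c)
    return coloring


# Example: a triangle needs 3 colors.
print(greedy_edge_coloring(3, [(0, 1), (1, 2), (0, 2)]))
```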

In data-driven control and machine learning, a common requirement involves breaking down large matrices into smaller, low-rank factors that possess specific levels of sparsity. This paper introduces an innovative solution to the orthogonal nonnegative matrix factorization (ONMF) problem. The objective is to approximate input data by using two low-rank nonnegative matrices, adhering to both orthogonality and $\ell_0$-norm sparsity constraints. The proposed maximum-entropy-principle-based framework ensures orthogonality and sparsity of the features or the mixing matrix, while maintaining nonnegativity in both. Additionally, the methodology offers a quantitative determination of the ``true'' number of underlying features, a crucial hyperparameter for ONMF. Experimental evaluation on synthetic and standard datasets highlights the method's superiority in terms of sparsity, orthogonality, and computational speed compared to existing approaches. Notably, the proposed method achieves reconstruction errors comparable to or better than those reported in the literature.
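For reference, one common way to write an ONMF problem with an explicit $\ell_0$ budget is shown below; the notation (orthogonality placed on the mixing matrix $H$, sparsity budget $s$, $k$ features) is an assumption for illustration, not the paper's exact formulation.

$$
\begin{aligned}
\min_{W \ge 0,\; H \ge 0} \quad & \lVert X - W H \rVert_F^2 \\
\text{s.t.} \quad & H H^{\top} = I_k, \qquad \lVert H \rVert_0 \le s,
\end{aligned}
$$

where $X$ is the input data matrix, $W$ holds the $k$ nonnegative features, and $H$ is the nonnegative mixing matrix.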

This paper presents exact formulas for the probability distribution function (PDF) and moment generating function (MGF) of the sum-product of statistically independent but not necessarily identically distributed (i.n.i.d.) Nakagami-$m$ random variables (RVs) in terms of Meijer's G-function. Additionally, exact series representations are also derived for the sum of double-Nakagami RVs, providing useful insights on the trade-off between accuracy and computational cost. Simple asymptotic analytical expressions are provided to gain further insight into the derived formula, and the achievable diversity order is obtained. The suggested statistical properties are proved to be a highly useful tool for modeling parallel cascaded Nakagami-$m$ fading channels. The application of these new results is illustrated by deriving exact expressions and simple tight upper bounds for the outage probability (OP) and average symbol error rate (ASER) of several binary and multilevel modulation signals in intelligent reflecting surfaces (IRSs)-assisted communication systems operating over Nakagami-$m$ fading channels. It is demonstrated that the new asymptotic expression is highly accurate and can be extended to encompass a wider range of scenarios. To validate the theoretical frameworks and formulations, Monte-Carlo simulation results are presented. Additionally, supplementary simulations are provided to compare the derived results with two common types of approximations available in the literature, namely the central limit theorem (CLT) and gamma distribution.
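To fix notation, the Nakagami-$m$ density and the sum of double-Nakagami (product) terms considered above can be written as follows; the symbols ($m_i$, $\Omega_i$, $N$) are assumed here for illustration.

$$
f_{X_i}(x) = \frac{2\, m_i^{m_i}}{\Gamma(m_i)\,\Omega_i^{m_i}}\, x^{2 m_i - 1}
\exp\!\left(-\frac{m_i x^2}{\Omega_i}\right), \quad x \ge 0,
\qquad
Z = \sum_{i=1}^{N} X_i\, Y_i ,
$$

where each $X_i$, $Y_i$ is an i.n.i.d. Nakagami-$m$ RV and $Z$ is the sum-product variable whose PDF and MGF are derived in terms of Meijer's G-function.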

Supporting applications in emerging domains like cyber-physical systems and human-in-the-loop scenarios typically requires adherence to strict end-to-end delay guarantees. Many tandem processes, unfolding layer by layer within the wireless network, contribute to the end-to-end delay and can cause violations of delay constraints, thereby severely degrading application performance. Meeting the application's stringent requirements necessitates coordinated optimization of the end-to-end delay by fine-tuning all contributing processes. To achieve this task, we designed and implemented EDAF, a framework that decomposes packets' end-to-end delays and determines each component's significance for 5G networks. We showcase EDAF on the OpenAirInterface 5G uplink, modified to report timestamps across the data plane. By applying the obtained insights, we optimized the end-to-end uplink delay by eliminating segmentation and frame-alignment delays, decreasing the average delay from 12 ms to 4 ms.
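The following Python sketch illustrates the general idea of decomposing an end-to-end delay from per-layer timestamps and ranking each segment's contribution; the segment names are hypothetical and do not reflect EDAF's actual instrumentation points.

```python
# Sketch of delay decomposition from per-packet timestamps collected along the
# data plane (stage names are illustrative, not EDAF's actual probe points).

def decompose_delay(timestamps):
    """timestamps: ordered list of (stage_name, time_in_ms) for one packet."""
    segments = {}
    for (prev_stage, t_prev), (stage, t) in zip(timestamps, timestamps[1:]):
        segments[f"{prev_stage}->{stage}"] = t - t_prev
    total = timestamps[-1][1] - timestamps[0][1]
    shares = {k: v / total for k, v in segments.items()}  # each component's share
    return total, segments, shares


total, segs, shares = decompose_delay(
    [("app_tx", 0.0), ("rlc_queue", 3.1), ("mac_sched", 8.4), ("app_rx", 12.0)]
)
print(total, segs, shares)
```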

Typically, training LLMs with long context sizes is computationally expensive, requiring extensive training hours and GPU resources. Existing long-context extension methods usually need additional training procedures to support the corresponding long-context windows, where long-context training data (e.g., 32k) is needed and high GPU training costs are incurred. To address the aforementioned issues, we propose an Efficient and Extreme length extension method for Large Language Models, called E$^2$-LLM, with only one training procedure and dramatically reduced computation cost, which also removes the need to collect long-context data. Concretely, first, the training data of our E$^2$-LLM only requires a short length (e.g., 4k), which greatly reduces the tuning cost. Second, the training procedure on the short training context window is performed only once, and we can support different evaluation context windows at inference. Third, in E$^2$-LLM, based on RoPE position embeddings, we introduce two different augmentation methods on the scale and position index parameters for different samples in training. These aim to make the model more robust to different relative position distances when directly interpolating to arbitrary context lengths at inference. Comprehensive experimental results on multiple benchmark datasets demonstrate the effectiveness of our E$^2$-LLM on challenging long-context tasks.
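As a rough illustration of scaling RoPE position indices, the sketch below applies a per-sample scale factor to the rotary angles; the parameter names and the specific augmentation are assumptions for illustration, not E$^2$-LLM's exact procedure.

```python
import numpy as np

# Illustrative RoPE with a per-sample scale on position indices. Varying the
# scale across training samples follows the idea in the abstract; the concrete
# scheme here is only an assumption.

def rope_rotate(x, positions, scale=1.0, base=10000.0):
    """x: (seq_len, dim) with even dim; returns x with rotary embedding applied."""
    seq_len, dim = x.shape
    half = dim // 2
    inv_freq = base ** (-np.arange(half) / half)        # rotary frequencies
    angles = np.outer(positions / scale, inv_freq)      # interpolated position angles
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)


x = np.random.randn(8, 16)
short_ctx = rope_rotate(x, np.arange(8), scale=1.0)   # training-length context
long_ctx = rope_rotate(x, np.arange(8), scale=4.0)    # emulates a 4x longer window
```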

This work introduces E3x, a software package for building neural networks that are equivariant with respect to the Euclidean group $\mathrm{E}(3)$, consisting of translations, rotations, and reflections of three-dimensional space. Compared to ordinary neural networks, $\mathrm{E}(3)$-equivariant models promise benefits whenever input and/or output data are quantities associated with three-dimensional objects. This is because the numeric values of such quantities (e.g. positions) typically depend on the chosen coordinate system. Under transformations of the reference frame, the values change predictably, but the underlying rules can be difficult to learn for ordinary machine learning models. With built-in $\mathrm{E}(3)$-equivariance, neural networks are guaranteed to satisfy the relevant transformation rules exactly, resulting in superior data efficiency and accuracy. The code for E3x is available at //github.com/google-research/e3x; detailed documentation and usage examples can be found at //e3x.readthedocs.io.
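The transformation rule that equivariant models satisfy can be checked numerically as below; the function here is a toy example written without E3x, shown only to make the property $f(Rx) = R\,f(x)$ concrete.

```python
import numpy as np

# Numerical check of rotation equivariance f(Rx) = R f(x) for a toy vector-valued
# function; E(3)-equivariant architectures guarantee this by construction.

def toy_equivariant_fn(positions):
    """Maps each point to the vector pointing at the centroid (rotation-equivariant)."""
    return positions.mean(axis=0, keepdims=True) - positions


def random_rotation():
    q, _ = np.linalg.qr(np.random.randn(3, 3))
    return q * np.sign(np.linalg.det(q))   # ensure a proper rotation (det = +1)


pts = np.random.randn(5, 3)
R = random_rotation()
lhs = toy_equivariant_fn(pts @ R.T)        # transform inputs, then apply f
rhs = toy_equivariant_fn(pts) @ R.T        # apply f, then transform outputs
assert np.allclose(lhs, rhs)
```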

Deployment of Internet of Things (IoT) devices and Data Fusion techniques have gained popularity in public and government domains. This usually requires capturing and consolidating data from multiple sources. As datasets do not necessarily originate from identical sensors, fused data typically results in a complex data problem. Because the military is investigating how heterogeneous IoT devices can aid its processes and tasks, we investigate a multi-sensor approach. Moreover, we propose a signal-to-image encoding approach that transforms and fuses signals from IoT wearable devices into an image that is invertible and easier to visualize, thereby supporting decision making. Furthermore, we investigate the challenge of enabling intelligent identification and detection, and demonstrate the feasibility of the proposed Deep Learning and Anomaly Detection models that can support future applications utilizing hand-gesture data from wearable devices.
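A minimal illustration of an invertible signal-to-image encoding is given below: each sensor's window is min-max normalized and reshaped into one image channel, with the scaling parameters stored so the signal can be recovered. This is only a generic sketch of the idea, not the paper's specific encoding.

```python
import numpy as np

# Minimal invertible encoding: each sensor's window becomes one image channel via
# min-max scaling + reshape. Illustrative only; the paper's encoding may differ.

def encode(signals, side):
    """signals: (n_sensors, side*side) array -> (side, side, n_sensors) image."""
    mins = signals.min(axis=1, keepdims=True)
    maxs = signals.max(axis=1, keepdims=True)
    scaled = (signals - mins) / (maxs - mins + 1e-12)
    image = scaled.reshape(signals.shape[0], side, side).transpose(1, 2, 0)
    return image, (mins, maxs)


def decode(image, params):
    mins, maxs = params
    scaled = image.transpose(2, 0, 1).reshape(image.shape[2], -1)
    return scaled * (maxs - mins + 1e-12) + mins


signals = np.random.randn(3, 16 * 16)        # e.g. 3 wearable-sensor channels
img, params = encode(signals, 16)
assert np.allclose(decode(img, params), signals)
```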

Click-through rate (CTR) prediction plays a critical role in recommender systems and online advertising. The data used in these applications are multi-field categorical data, where each feature belongs to one field. Field information has proven to be important, and several works incorporate fields in their models. In this paper, we propose a novel approach to model the field information effectively and efficiently. The proposed approach is a direct improvement of FwFM, and is named Field-matrixed Factorization Machines (FmFM, or $FM^2$). We also propose a new explanation of FM and FwFM within the FmFM framework, and compare it with FFM. Besides pruning the cross terms, our model supports field-specific variable dimensions of embedding vectors, which acts as soft pruning. We also propose an efficient way to minimize the dimensions while keeping the model performance. The FmFM model can be optimized further by caching the intermediate vectors, so it takes only thousands of floating-point operations (FLOPs) to make a prediction. Our experimental results show that it can outperform FFM, which is more complex. The FmFM model's performance is also comparable to that of DNN models, which require many more FLOPs at runtime.
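The field-matrixed interaction can be sketched as below: the embedding of feature $i$ is mapped through a matrix specific to the pair of fields (field of $i$, field of $j$) before a dot product with feature $j$'s embedding. Shapes and names are illustrative, and the square matrices here ignore the field-specific variable dimensions mentioned above.

```python
import numpy as np

# Sketch of the field-matrixed (FmFM-style) pairwise interaction.

def fmfm_score(embeddings, fields, field_matrices):
    """embeddings: list of (d,) vectors; fields: field id per feature;
    field_matrices[f1][f2]: (d, d) matrix for the field pair."""
    score = 0.0
    n = len(embeddings)
    for i in range(n):
        for j in range(i + 1, n):
            M = field_matrices[fields[i]][fields[j]]
            score += float((embeddings[i] @ M) @ embeddings[j])
    return score


d, n_fields = 4, 3
emb = [np.random.randn(d) for _ in range(3)]
mats = np.random.randn(n_fields, n_fields, d, d)
print(fmfm_score(emb, [0, 1, 2], mats))
```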

Graph convolution networks (GCNs) are increasingly popular in many applications, yet remain notoriously hard to train over large graph datasets. They need to compute node representations recursively from their neighbors. Current GCN training algorithms suffer from either high computational costs that grow exponentially with the number of layers, or high memory usage for loading the entire graph and node embeddings. In this paper, we propose a novel efficient layer-wise training framework for GCN (L-GCN) that disentangles feature aggregation and feature transformation during training, hence greatly reducing time and memory complexity. We present a theoretical analysis of L-GCN under the graph isomorphism framework, showing that L-GCN leads to GCNs as powerful as those produced by the more costly conventional training algorithm, under mild conditions. We further propose L^2-GCN, which learns a controller for each layer that can automatically adjust the training epochs per layer in L-GCN. Experiments show that L-GCN is faster than the state of the art by at least an order of magnitude, with consistent memory usage that does not depend on dataset size, while maintaining comparable prediction performance. With the learned controller, L^2-GCN can further cut the training time in half. Our code is available at //github.com/Shen-Lab/L2-GCN.
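A rough sketch of layer-wise training that separates aggregation from transformation is shown below; it follows only the general idea in the abstract (aggregate once per layer, then greedily train that layer's transform), with all module and variable names assumed rather than taken from the released L-GCN code.

```python
import torch
import torch.nn as nn

# Layer-wise sketch: per layer, features are aggregated with the normalized
# adjacency (no gradients needed), then only a linear transform plus a small
# classifier head is trained before moving on. Illustrative only.

def train_layerwise(adj_norm, features, labels, dims, epochs=100, lr=0.01):
    x = features
    layers = []
    for out_dim in dims:
        x = adj_norm @ x                       # feature aggregation
        lin = nn.Linear(x.shape[1], out_dim)
        clf = nn.Linear(out_dim, int(labels.max()) + 1)
        opt = torch.optim.Adam(list(lin.parameters()) + list(clf.parameters()), lr=lr)
        for _ in range(epochs):                # feature transformation, trained greedily
            opt.zero_grad()
            loss = nn.functional.cross_entropy(clf(torch.relu(lin(x))), labels)
            loss.backward()
            opt.step()
        x = torch.relu(lin(x)).detach()        # frozen output feeds the next layer
        layers.append(lin)
    return layers, x
```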

When labeled training data is scarce, a promising data augmentation approach is to generate visual features of unknown classes using their attributes. To learn the class-conditional distribution of CNN features, these models rely on pairs of image features and class attributes. Hence, they cannot make use of the abundance of unlabeled data samples. In this paper, we tackle any-shot learning problems, i.e. zero-shot and few-shot, in a unified feature-generating framework that operates in both inductive and transductive learning settings. We develop a conditional generative model that combines the strengths of VAEs and GANs and, in addition, via an unconditional discriminator, learns the marginal feature distribution of unlabeled images. We empirically show that our model learns highly discriminative CNN features on five datasets, including CUB, SUN, AWA, and ImageNet, and establishes a new state of the art in any-shot learning, i.e. inductive and transductive (generalized) zero- and few-shot learning settings. We also demonstrate that our learned features are interpretable: we visualize them by inverting them back to the pixel space, and we explain them by generating textual arguments for why they are associated with a certain label.
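A compact sketch of the two-discriminator idea is shown below: a class-conditional generator of CNN features, a conditional discriminator for labeled (feature, attribute) pairs, and an unconditional discriminator for unlabeled features. The layer sizes and names are assumptions for illustration, not the authors' released model.

```python
import torch
import torch.nn as nn

# Sketch: conditional feature generator plus conditional and unconditional
# discriminators (the latter covers unlabeled data in the transductive setting).

class Generator(nn.Module):
    def __init__(self, attr_dim, noise_dim, feat_dim, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(attr_dim + noise_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, feat_dim), nn.ReLU())   # CNN features are non-negative

    def forward(self, attrs, noise):
        return self.net(torch.cat([attrs, noise], dim=-1))


class Discriminator(nn.Module):
    def __init__(self, in_dim, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.LeakyReLU(0.2), nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x)


feat_dim, attr_dim, noise_dim = 2048, 312, 64      # example sizes only
G = Generator(attr_dim, noise_dim, feat_dim)
D_cond = Discriminator(feat_dim + attr_dim)        # judges (feature, attribute) pairs
D_uncond = Discriminator(feat_dim)                 # judges features alone (unlabeled data)
```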
