
Many electromagnetic time reversal (EMTR)-based fault location methods have been proposed in the last decade. In this paper, we briefly review the EMTR-based fault location method using direct convolution (EMTR-conv) and generalize it to multi-phase transmission lines. Moreover, the parameters of real transmission lines are frequency-dependent, whereas previous studies often used constant parameters in the reverse process of EMTR-based methods. We investigate the influence of this simplification on fault location performance by considering frequency-dependent parameters and lossy ground in the forward process. The results show that the location error grows as the distance between the observation point and the fault position increases, especially when the ground resistivity is high. We therefore propose a correction method that reduces the location error by using two observation points. Numerical experiments are carried out on a 3-phase 300-km transmission line with different ground resistivities, fault types, and fault conditions. The method achieves small location errors and works efficiently via direct convolution of the signals collected from the fault with the pre-stored calculated transient signals.
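As a rough illustration of the convolution-based location criterion (a minimal sketch, not the paper's implementation; the function name, sampling setup, and candidate dictionary are hypothetical), one can time-reverse the measured fault transient, convolve it with the pre-stored transient simulated for each candidate fault position, and pick the position whose convolution peaks highest:

```python
import numpy as np

def emtr_conv_locate(measured, candidates):
    """Rank candidate fault positions by the peak magnitude of the
    convolution between the time-reversed measured transient and each
    pre-stored simulated transient (EMTR-conv focusing criterion)."""
    reversed_sig = measured[::-1]                    # time reversal of the observed transient
    scores = {}
    for position_km, simulated in candidates.items():
        conv = np.convolve(reversed_sig, simulated)  # direct convolution with the stored signal
        scores[position_km] = np.max(np.abs(conv))   # focusing metric: peak magnitude
    # the candidate with the largest focusing metric is the estimated fault location
    return max(scores, key=scores.get), scores
```

With two observation points, the same ranking can be computed at each point and the two estimates combined, which is the spirit of the double-observation-point correction mentioned above.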

Related content

When deploying autonomous systems that rely on several sensors for perception, accurate and reliable extrinsic calibration is required. In this research, we offer a reliable technique that extrinsically calibrates multiple lidars in the base frame of a moving vehicle without the use of odometry estimation or fiducial markers. Our method is based on comparing the raw signals of an IMU collocated with the lidar against the IMU measurements from the GNSS system in the vehicle base frame. Additionally, based on our observability criterion, we choose the measurements that carry the most mutual information rather than comparing all available IMU readings. This enables us to identify the measurements that are most useful for real-time calibration. We have successfully validated our methodology using data gathered from Scania test vehicles with various sensor setups.
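The abstract does not spell out the observability criterion, so the following is only a loose stand-in: it scores aligned windows of the two IMU streams with a histogram-based mutual-information estimate and keeps the most informative windows. Window length, bin count, and all names are assumptions.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram-based mutual information between two 1-D signals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def select_informative_windows(lidar_imu, base_imu, window=200, keep=10):
    """Score aligned windows of two 1-D IMU streams (e.g., angular-rate
    magnitude) by mutual information and keep the most informative ones."""
    n = min(len(lidar_imu), len(base_imu)) // window
    scored = []
    for i in range(n):
        s = slice(i * window, (i + 1) * window)
        scored.append((mutual_information(lidar_imu[s], base_imu[s]), i))
    return sorted(scored, reverse=True)[:keep]
```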

The stopp R package deals with spatio-temporal point processes occurring in Euclidean space or on linear networks such as the roads of a city. The package contains functions to summarize, plot, and perform different kinds of analyses on point processes, mainly following methods proposed in recent scientific literature. The main topics of these works, and of the package in turn, include modeling, statistical inference, and simulation of spatio-temporal point processes on Euclidean space and linear networks, with a focus on their local characteristics. We contribute to the existing literature by collecting many of the most widespread methods for the analysis of spatio-temporal point processes into a single package, which is intended to welcome further proposals and extensions.
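To fix ideas about the kind of object the package analyzes, here is an illustrative simulation of a homogeneous spatio-temporal Poisson process on a rectangle (written in Python for illustration only; it does not reflect the package's R interface).

```python
import numpy as np

def simulate_st_poisson(lam, t_max=1.0, region=(1.0, 1.0), seed=None):
    """Simulate a homogeneous spatio-temporal Poisson process with
    intensity `lam` on [0, region[0]] x [0, region[1]] x [0, t_max]."""
    rng = np.random.default_rng(seed)
    volume = region[0] * region[1] * t_max
    n = rng.poisson(lam * volume)            # total number of events
    x = rng.uniform(0, region[0], n)
    y = rng.uniform(0, region[1], n)
    t = rng.uniform(0, t_max, n)
    order = np.argsort(t)                    # return events in temporal order
    return x[order], y[order], t[order]
```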

Autonomous vehicles have limited computational resources; hence, their control systems must be efficient. The cost and size of sensors have also limited the development of self-driving cars. To overcome these restrictions, this study proposes an efficient framework for the operation of vision-based autonomous vehicles; the framework requires only a monocular camera and a few inexpensive radars. The proposed algorithm comprises a multi-task UNet (MTUNet) network for extracting image features and constrained iterative linear quadratic regulator (CILQR) and vision predictive control (VPC) modules for rapid motion planning and control. MTUNet is designed to simultaneously solve lane line segmentation, ego vehicle heading angle regression, road type classification, and traffic object detection at approximately 40 FPS (frames per second) for 228 x 228 pixel RGB input images. The CILQR controllers then use the MTUNet outputs and radar data as inputs to produce driving commands for lateral and longitudinal vehicle guidance within only 1 ms. In particular, the VPC algorithm is included to reduce the steering command latency to below the actuator latency and thereby prevent vehicle understeer during tight turns. The VPC algorithm uses road curvature data from MTUNet to estimate a correction to the current steering angle at a look-ahead point and adjust the turning amount accordingly. Including the VPC algorithm in a VPC-CILQR controller leads to higher performance than CILQR alone; this controller minimizes the influence of command lag, maintaining the ego car's speed at 76 km/h and its lateral offset within 0.52 m on a simulated road with a curvature of 0.03 1/m. Our experiments demonstrate that the proposed autonomous driving system, which does not require high-definition maps, could be applied in current autonomous vehicles.
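The abstract does not give the VPC correction formula; the sketch below assumes a kinematic bicycle feedforward law, $\delta = \arctan(L\kappa)$, to illustrate how curvature at a look-ahead point can be turned into a steering correction. The wheelbase value and function name are hypothetical.

```python
import math

def vpc_steering_correction(kappa_now, kappa_ahead, wheelbase=2.7):
    """Feedforward steering correction from road curvature at a look-ahead
    point, assuming a kinematic bicycle model (delta = atan(L * kappa)).
    Returns the angle to add to the current steering command so that it
    already matches the upcoming curvature."""
    delta_now = math.atan(wheelbase * kappa_now)
    delta_ahead = math.atan(wheelbase * kappa_ahead)
    return delta_ahead - delta_now

# example: approaching a 0.03 1/m curve from a gentler 0.01 1/m section
correction = vpc_steering_correction(0.01, 0.03)
```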

Cellular-connected unmanned aerial vehicles (UAVs) have attracted a surge of research interest in both academia and industry. To support aerial user equipment (UEs) in existing cellular networks, one promising approach is to assign a portion of the system bandwidth exclusively to the UAV-UEs. This is especially favorable for use cases where a large number of UAV-UEs are deployed, e.g., for package delivery close to a warehouse. Although the nearly line-of-sight (LoS) channels result in higher received power, UAVs can in turn cause severe interference to each other in the same frequency band. In this contribution, we focus on the uplink communications of massive cellular-connected UAVs. Different power allocation algorithms are proposed, based on the successive convex approximation (SCA) principle, to either maximize the minimal spectrum efficiency (SE) or maximize the overall SE under severe interference. One challenge is that a UAV can affect a large area, meaning that many more UAV-UEs must be considered jointly in the optimization problem, which is essentially different from the terrestrial-UE case. The necessity of single-carrier uplink transmission further complicates the problem. Nevertheless, we find that the large coherence bandwidths and coherence times of the propagation channels can be leveraged. The performance of the proposed algorithms is evaluated via extensive simulations in the full-buffer and bursty-traffic modes. Results show that the proposed algorithms effectively enhance the uplink SEs. This work can be considered the first attempt to deal with the interference among massive cellular-connected UAV-UEs through optimized power allocation.
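In generic form (omitting the paper's single-carrier and scheduling constraints, which are not detailed in the abstract), the max-min SE power allocation can be written as

\[
\max_{\{p_k\}} \; \min_{k} \; \log_2\!\left( 1 + \frac{p_k\, g_{k,k}}{\sum_{j \ne k} p_j\, g_{j,k} + \sigma^2} \right)
\quad \text{subject to} \quad 0 \le p_k \le P_{\max} \;\; \forall k,
\]

where $p_k$ is the transmit power of UAV-UE $k$, $g_{j,k}$ the channel gain from UAV-UE $j$ to the cell serving UE $k$, and $\sigma^2$ the noise power. SCA handles the non-concave SINR terms by iteratively replacing them with concave lower bounds, so that each subproblem is convex.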

We introduce the Weak-form Estimation of Nonlinear Dynamics (WENDy) method for estimating model parameters for nonlinear systems of ODEs. The core mathematical idea involves an efficient conversion of the strong form representation of a model to its weak form, followed by solving a regression problem to perform parameter inference. The core statistical idea rests on the Errors-In-Variables framework, which necessitates the use of the iteratively reweighted least squares algorithm. Further improvements are obtained by using orthonormal test functions, created from a set of $C^{\infty}$ bump functions of varying support sizes. We demonstrate that WENDy is a highly robust and efficient method for parameter inference in differential equations. Without relying on any numerical differential equation solvers, WENDy computes accurate estimates and is robust to large (biologically relevant) levels of measurement noise. For low-dimensional systems with modest amounts of data, WENDy is competitive with conventional forward solver-based nonlinear least squares methods in terms of speed and accuracy. For both higher-dimensional systems and stiff systems, WENDy is typically both faster (often by orders of magnitude) and more accurate than forward solver-based approaches. We illustrate the method and its performance in some common population and neuroscience models, including logistic growth, Lotka-Volterra, FitzHugh-Nagumo, Hindmarsh-Rose, and a Protein Transduction Benchmark model. Software and code for reproducing the examples are available at (//github.com/MathBioCU/WENDy).
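Schematically, the strong-to-weak conversion at the core of WENDy works as follows: for a state $u$ satisfying $\dot{u} = f(u;\theta)$ and a compactly supported test function $\phi \in C^{\infty}_c([0,T])$, integration by parts moves the derivative off the (noisy) data,

\[
\int_0^T \phi(t)\,\dot{u}(t)\,dt \;=\; -\int_0^T \dot{\phi}(t)\,u(t)\,dt \;=\; \int_0^T \phi(t)\, f\big(u(t);\theta\big)\,dt .
\]

When $f$ is linear in the parameters, evaluating these integrals by quadrature over many test functions yields a linear system in $\theta$, and the errors-in-variables structure of that system is what motivates the iteratively reweighted least squares step. (This sketches the general weak-form idea rather than the paper's exact derivation.)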

APT traffic detection is an important task in the network security domain and is of great significance for enterprise security. Most APT traffic uses encrypted communication protocols as the data transmission medium, which greatly increases the difficulty of detection. This paper analyzes the problems of existing machine-learning-based methods for detecting APT encrypted traffic and proposes a detection method based on two-party, multi-session traffic. The method needs to extract only a small number of features, such as the session sequence, session time intervals, and upstream and downstream data sizes, and converts them into images. A convolutional neural network can then be applied to recognize the images and thereby identify the network traffic. In preliminary tests comprising five experiments, the method achieves good results, which verifies its effectiveness.
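One way to realize the feature-to-image step described above is sketched below; the feature names, layout, and normalization are illustrative assumptions, not the paper's exact encoding.

```python
import numpy as np

def sessions_to_image(sessions, side=28):
    """Pack per-session features (upstream bytes, downstream bytes,
    inter-session gap in seconds) collected between two parties into a
    fixed-size grayscale image suitable for CNN-based classification."""
    feats = []
    for s in sessions:
        feats.extend([s["up_bytes"], s["down_bytes"], s["gap_seconds"]])
    vec = np.zeros(side * side, dtype=np.float32)
    vec[: min(len(feats), side * side)] = feats[: side * side]
    vec = np.log1p(vec)                           # compress heavy-tailed byte counts
    vec /= vec.max() if vec.max() > 0 else 1.0    # normalize to [0, 1]
    return vec.reshape(side, side)                # image fed to a standard CNN classifier
```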

This work considers the low-rank approximation of a matrix $A(t)$ depending on a parameter $t$ in a compact set $D \subset \mathbb{R}^d$. Application areas that give rise to such problems include computational statistics and dynamical systems. Randomized algorithms are an increasingly popular approach for performing low-rank approximation and they usually proceed by multiplying the matrix with random dimension reduction matrices (DRMs). Applying such algorithms directly to $A(t)$ would involve different, independent DRMs for every $t$, which is not only expensive but also leads to inherently non-smooth approximations. In this work, we propose to use constant DRMs, that is, $A(t)$ is multiplied with the same DRM for every $t$. The resulting parameter-dependent extensions of two popular randomized algorithms, the randomized singular value decomposition and the generalized Nystr\"{o}m method, are computationally attractive, especially when $A(t)$ admits an affine linear decomposition with respect to $t$. We perform a probabilistic analysis for both algorithms, deriving bounds on the expected value as well as failure probabilities for the $L^2$ approximation error when using Gaussian random DRMs. Both the theoretical results and the numerical experiments show that the use of constant DRMs does not impair their effectiveness; our methods reliably return quasi-best low-rank approximations.
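A minimal sketch of the constant-DRM idea for the randomized SVD (parameter names and the affine example are illustrative): the same Gaussian sketching matrix is reused for every parameter value, so the computed factors vary smoothly with $t$.

```python
import numpy as np

def parametric_randomized_svd(A_of_t, ts, n_cols, rank, oversample=10, rng=None):
    """Rank-`rank` randomized SVD of A(t) for each t in `ts`, reusing the
    SAME Gaussian dimension-reduction matrix (DRM) for every t."""
    rng = np.random.default_rng(rng)
    Omega = rng.standard_normal((n_cols, rank + oversample))  # constant DRM, drawn once
    factors = []
    for t in ts:
        A = A_of_t(t)
        Q, _ = np.linalg.qr(A @ Omega)                        # orthonormal basis of the range sketch
        U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
        factors.append((Q @ U_small[:, :rank], s[:rank], Vt[:rank]))
    return factors

# example with an affine parameter dependence A(t) = A0 + t * A1
rng = np.random.default_rng(0)
A0, A1 = rng.standard_normal((100, 80)), rng.standard_normal((100, 80))
factors = parametric_randomized_svd(lambda t: A0 + t * A1, [0.0, 0.5, 1.0], n_cols=80, rank=5)
```

When $A(t)$ has an affine decomposition, the products $A_0\Omega$ and $A_1\Omega$ can be precomputed once, which is what makes the constant-DRM approach particularly cheap.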

The symmetric $C^0$ interior penalty method is one of the most popular discontinuous Galerkin methods for the biharmonic equation. This paper introduces an automatic local selection of the involved stability parameter in terms of the geometry of the underlying triangulation for arbitrary polynomial degrees. The proposed choice ensures a stable discretization with a guaranteed discrete ellipticity constant. Numerical evidence for uniform and adaptive mesh refinement and various polynomial degrees supports the reliability and efficiency of the local parameter selection and recommends its use in practice. The approach is documented in 2D for triangles, but the underlying methodology can be generalized to higher dimensions, non-uniform polynomial degrees, and rectangular discretizations. Two appendices present the realization of the proposed parameter selection in several established finite element software packages as well as a detailed documentation of a self-contained MATLAB program for the lowest-order $C^0$ interior penalty method.
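For orientation, a standard form of the symmetric $C^0$ interior penalty bilinear form (with the usual jump $[\cdot]$ and average $\{\cdot\}$ of normal derivatives across an edge $E$ of length $h_E$) reads

\[
a_h(u,v) \;=\; \sum_{T\in\mathcal{T}} \int_T D^2 u : D^2 v \,dx
\;-\; \sum_{E\in\mathcal{E}} \int_E \Big( \{\partial^2_{nn} u\}\,[\partial_n v] + \{\partial^2_{nn} v\}\,[\partial_n u] \Big)\,ds
\;+\; \sum_{E\in\mathcal{E}} \frac{\sigma_E}{h_E} \int_E [\partial_n u]\,[\partial_n v]\,ds,
\]

where $\sigma_E > 0$ is the edge-wise stability parameter whose automatic, geometry-based local selection is the subject of the paper; the notation here is the textbook one and may differ in details from the paper's.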

Machine learning models have been deployed in mobile networks to deal with massive data from different layers and to enable automated network management and intelligence on devices. To overcome the high communication cost and severe privacy concerns of centralized machine learning, federated learning (FL) has been proposed to achieve distributed machine learning among networked devices. While computation and communication limitations have been widely studied, the impact of on-device storage on the performance of FL remains unexplored. Without an effective data selection policy to filter the massive streaming data on devices, classical FL can suffer from much longer model training time ($4\times$) and a significant reduction in inference accuracy ($7\%$), as observed in our experiments. In this work, we take the first step toward online data selection for FL with limited on-device storage. We first define a new data valuation metric for data evaluation and selection in FL, with theoretical guarantees for simultaneously speeding up model convergence and enhancing final model accuracy. We further design {\ttfamily ODE}, a framework of \textbf{O}nline \textbf{D}ata s\textbf{E}lection for FL, to coordinate networked devices to store valuable data samples. Experimental results on one industrial dataset and three public datasets show the remarkable advantages of {\ttfamily ODE} over state-of-the-art approaches. In particular, on the industrial dataset, {\ttfamily ODE} achieves up to a $2.5\times$ speedup in training time and a $6\%$ increase in inference accuracy, and it is robust to various factors in practical environments.
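The data valuation metric itself is defined in the paper and not reproduced here; as a loose sketch of the storage-constrained selection loop, the snippet below keeps the highest-scoring samples from a data stream using a placeholder `value_fn`.

```python
import heapq

def online_select(stream, capacity, value_fn):
    """Keep the `capacity` highest-valued samples from a data stream under
    limited on-device storage; `value_fn` stands in for a data valuation
    metric."""
    buffer = []  # min-heap of (value, arrival index, sample)
    for i, sample in enumerate(stream):
        v = value_fn(sample)
        if len(buffer) < capacity:
            heapq.heappush(buffer, (v, i, sample))
        elif v > buffer[0][0]:
            heapq.heapreplace(buffer, (v, i, sample))  # evict the lowest-valued sample
    return [s for _, _, s in buffer]
```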

Dynamic Algorithm Configuration (DAC) tackles the question of how to automatically learn policies to control parameters of algorithms in a data-driven fashion. This question has received considerable attention from the evolutionary computation community in recent years. A good benchmark collection for gaining a structural understanding of the effectiveness and limitations of different solution methods for DAC is therefore strongly desirable. Following recent work on DAC benchmarks with well-understood theoretical properties and ground-truth information, we suggest as a new DAC benchmark the control of the key parameter $\lambda$ in the $(1+(\lambda,\lambda))$~Genetic Algorithm for solving OneMax problems. We conduct a study on how to solve this DAC problem via (static) automated algorithm configuration on the benchmark and propose techniques that significantly improve the performance of the approach. On sufficiently large problem sizes, our approach consistently outperforms the benchmark's default parameter control policy derived from previous theoretical work. We also present new findings on the landscape of parameter-control search policies and propose methods to compute stronger baselines for the benchmark via numerical approximations of the true optimal policies.
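For concreteness, here is a minimal sketch of the benchmark's target algorithm, the $(1+(\lambda,\lambda))$ GA on OneMax, with $\lambda$ supplied each iteration by a control policy; the example policy $\lambda = \sqrt{n/(n-f(x))}$ is the theory-derived default mentioned above, while the function names and budget handling are illustrative.

```python
import numpy as np

def one_max(x):
    return int(x.sum())

def one_plus_ll_ga(n, policy, max_evals=100_000, rng=None):
    """(1+(lambda,lambda)) GA on OneMax; `policy(fitness, n)` returns the
    offspring population size lambda used in the current iteration."""
    rng = np.random.default_rng(rng)
    x = rng.integers(0, 2, n)
    evals = 0
    while one_max(x) < n and evals < max_evals:
        lam = max(1, int(round(policy(one_max(x), n))))
        p, c = min(1.0, lam / n), 1.0 / lam              # mutation rate and crossover bias
        ell = rng.binomial(n, p)                         # mutation strength shared by all mutants
        mutants = [x.copy() for _ in range(lam)]
        for m in mutants:
            flips = rng.choice(n, size=ell, replace=False)
            m[flips] ^= 1
        x_prime = max(mutants, key=one_max)              # best mutant
        offspring = [np.where(rng.random(n) < c, x_prime, x) for _ in range(lam)]
        y = max(offspring, key=one_max)                  # best crossover offspring
        if one_max(y) >= one_max(x):
            x = y
        evals += 2 * lam
    return evals

# default theory-derived control policy: lambda = sqrt(n / (n - f(x)))
evaluations = one_plus_ll_ga(200, lambda f, n: (n / (n - f)) ** 0.5)
```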
