Optical wireless communication (OWC) over atmospheric turbulence and pointing errors is a well-studied topic. Still, there is limited research on signal fading due to random fog in an outdoor environment for terrestrial wireless communications. In this paper, we analyze the performance of decode-and-forward (DF) relaying under the combined effect of random fog, pointing errors, and atmospheric turbulence with a negligible line-of-sight (LOS) direct link. We consider a generalized model for the end-to-end channel with independent and not identically distributed (i.ni.d.) pointing errors, random fog with a Gamma-distributed attenuation coefficient, double generalized gamma (DGG) atmospheric turbulence, and asymmetrical distances between the source and destination. We derive the density and distribution functions of the signal-to-noise ratio (SNR) under the combined effect of random fog, pointing errors, and atmospheric turbulence (the FPT channel), and the distribution function for the combined channel with random fog and pointing errors (the FP channel). Using the derived statistical results, we present analytical expressions for the outage probability, average SNR, ergodic rate, and average bit error rate (BER) for both the FP and FPT channels in terms of OWC system parameters. We also develop simplified and asymptotic performance analyses to provide analytical insight into the system behavior under various practically relevant scenarios. We demonstrate the mutual effects of channel impairments and pointing errors on the OWC performance, and show that the relaying system provides a significant performance improvement over direct transmission, especially when pointing errors and fog become more pronounced.
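For intuition, the combined FPT channel can be sampled numerically. Below is a minimal Monte-Carlo sketch in Python that draws the fog attenuation coefficient from a Gamma distribution, models zero-boresight pointing errors through a Rayleigh radial displacement, and approximates DGG turbulence as a product of two generalized-Gamma variates; every parameter value is an assumption for illustration, not a value from the paper.

```python
import numpy as np
from scipy.stats import gamma, gengamma

# Minimal Monte-Carlo sketch of the combined FPT channel; all parameter
# values below are assumed for illustration.
rng = np.random.default_rng(0)
N, d = 10**6, 1.0                                  # samples, link length (km)

beta = gamma.rvs(a=2.0, scale=6.0, size=N, random_state=rng)  # fog (dB/km)
h_f = 10 ** (-beta * d / 10)

A0, w_eq, sigma_s = 0.9, 2.5, 0.3                  # pointing-error geometry
r = rng.rayleigh(scale=sigma_s, size=N)            # zero-boresight jitter
h_p = A0 * np.exp(-2 * r**2 / w_eq**2)

# DGG turbulence as a product of two generalized-Gamma variates
h_t = gengamma.rvs(a=2.0, c=1.0, size=N, random_state=rng) \
    * gengamma.rvs(a=1.8, c=1.0, size=N, random_state=rng)
h_t /= h_t.mean()                                  # normalize mean gain

snr = 10 ** (20 / 10) * (h_f * h_p * h_t) ** 2     # 20 dB avg SNR, IM/DD
print("outage @ 5 dB:", np.mean(snr < 10 ** (5 / 10)))
```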
We built a vision system for a curling robot that is expected to play alongside human curling players. Specifically, we built two types of vision systems, one for the thrower robot and one for the skip robot. First, the thrower robot drives toward a given point on the curling sheet to release a stone. The vision system in the thrower robot initializes the robot's 3-DoF pose on the two-dimensional curling sheet and updates this pose to decide when to release the stone. Second, the skip robot stands at the opposite end of the sheet from the thrower robot and monitors the state of the game to make strategic decisions. The vision system in the skip robot precisely recognizes every stone on the curling sheet. Since its viewpoint is highly oblique, many stones occlude each other, which makes accurately estimating stone positions challenging. We therefore recognize the elliptical outlines of the stone handles to find the exact midpoints of the stones using a perspective Hough transform. Furthermore, we track a thrown stone to produce a trajectory for ice-condition analysis. Finally, we implemented our vision systems on two mobile robots, which successfully performed single turns and even careful gameplay. In total, our vision system comprises three cameras with different viewpoints, each serving its own purpose.
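As a rough sketch of the stone-localization step, the following Python/OpenCV snippet fits ellipses to handle outlines and returns their centers. It substitutes cv2.fitEllipse for the paper's perspective Hough transform, so it is only a simplified stand-in; the Canny thresholds and axis-ratio bound are assumed.

```python
import cv2

def stone_midpoints(frame_bgr):
    """Estimate stone centers from elliptical handle outlines.
    Simplified stand-in for the paper's perspective Hough transform."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    centers = []
    for c in contours:
        if len(c) < 20:                      # fitEllipse needs >= 5 points
            continue
        (cx, cy), (ax1, ax2), _ = cv2.fitEllipse(c)
        ratio = min(ax1, ax2) / max(ax1, ax2)
        if ratio > 0.2:                      # plausibly a foreshortened circle
            centers.append((cx, cy))
    return centers
```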
Atmospheric turbulence deteriorates the quality of images captured by long-range imaging systems by introducing blur and geometric distortions to the captured scene. This leads to a drastic drop in performance when computer vision algorithms like object/face recognition and detection are applied to these images. In recent years, various deep learning-based atmospheric turbulence mitigation methods have been proposed in the literature. These methods are often trained on synthetically generated images and tested on real-world images. Hence, the performance of these restoration methods depends on the type of simulation used for training the network. In this paper, we systematically evaluate the effectiveness of various turbulence simulation methods for image restoration. In particular, we evaluate the performance of two state-of-the-art restoration networks using six simulation methods on a real-world LRFID dataset consisting of face images degraded by turbulence. This paper provides guidance to researchers and practitioners working in this field on choosing suitable data-generation models for training deep networks for turbulence mitigation. The implementation code for the simulation methods, the source code for the networks, and the pre-trained models will be made publicly available.
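For readers unfamiliar with how such training data is produced, the toy Python snippet below degrades an image with a smooth random warp (geometric distortion) followed by a Gaussian blur. It is purely illustrative and is not one of the six simulation methods evaluated in the paper; the strength and sigma parameters are assumptions.

```python
import numpy as np
import cv2

def toy_turbulence(img, strength=2.0, blur_sigma=1.5, seed=0):
    """Illustrative tilt-and-blur degradation: a smooth random
    displacement field approximates geometric distortion, and a
    Gaussian blur approximates the turbulence PSF."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    dx = cv2.GaussianBlur(rng.standard_normal((h, w)).astype(np.float32),
                          (0, 0), 8) * strength * 10
    dy = cv2.GaussianBlur(rng.standard_normal((h, w)).astype(np.float32),
                          (0, 0), 8) * strength * 10
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    warped = cv2.remap(img, xs + dx, ys + dy, cv2.INTER_LINEAR,
                       borderMode=cv2.BORDER_REFLECT)
    return cv2.GaussianBlur(warped, (0, 0), blur_sigma)
```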
We apply a reinforcement meta-learning framework to optimize an integrated and adaptive guidance and flight control system for an air-to-air missile. The system is implemented as a policy that maps navigation system outputs directly to commanded rates of change for the missile's control surface deflections. The system induces intercept trajectories against a maneuvering target that satisfy control constraints on fin deflection angles, and path constraints on look angle and load. We test the optimized system in a six degrees-of-freedom simulator that includes a non-linear radome model and a strapdown seeker model, and demonstrate that the system adapts to both a large flight envelope and off-nominal flight conditions including perturbation of aerodynamic coefficient parameters and center of pressure locations, and flexible body dynamics. Moreover, we find that the system is robust to the parasitic attitude loop induced by radome refraction and imperfect seeker stabilization. We compare our system's performance to a longitudinal model of proportional navigation coupled with a three loop autopilot, and find that our system outperforms this benchmark by a large margin. Additional experiments investigate the impact of removing the recurrent layer from the policy and value function networks, performance with an infrared seeker, and flexible body dynamics.
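A minimal sketch of what such a policy could look like is given below in PyTorch: a recurrent network mapping navigation observations to bounded fin-deflection-rate commands. The layer sizes, observation dimension, and rate limit are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GuidancePolicy(nn.Module):
    """Minimal policy sketch: navigation outputs -> commanded fin
    deflection rates. Sizes and the 400 deg/s rate limit are assumed."""
    def __init__(self, obs_dim=12, n_fins=4, hidden=64):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden, batch_first=True)  # recurrent layer
        self.head = nn.Linear(hidden, n_fins)

    def forward(self, obs_seq, h0=None):
        out, h = self.gru(obs_seq, h0)                 # (B, T, hidden)
        return 400.0 * torch.tanh(self.head(out)), h   # bounded rate commands

policy = GuidancePolicy()
rates, _ = policy(torch.randn(1, 10, 12))   # 10-step observation sequence
print(rates.shape)                          # torch.Size([1, 10, 4])
```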
As next-generation wireless networks thrive, full-duplex and relaying techniques are being combined to improve network performance. Random linear network coding (RLNC) is another popular technique to enhance the efficiency and reliability of wireless communications. In this paper, in order to explore the potential of RLNC in full-duplex relay networks, we investigate two fundamental perfect RLNC schemes and theoretically analyze their completion-delay performance. The first scheme is a straightforward application of conventional perfect RLNC as studied in wireless broadcast, so it involves no additional processing at the relay. Its performance serves as an upper bound among all perfect RLNC schemes. The other scheme allows a sufficiently large buffer and unconstrained linear coding at the relay. It attains the optimal performance and serves as a lower bound among all RLNC schemes. For both schemes, closed-form formulae characterizing the expected completion delay at a single receiver as well as for the whole system are derived. Numerical results are also presented to corroborate the theoretical characterizations and to compare the two new schemes with the existing one.
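To make the completion-delay metric concrete, the Python snippet below simulates perfect RLNC over GF(2) on a single point-to-point erasure link: each slot carries a random coefficient vector, and decoding completes once K linearly independent combinations arrive. This is an illustrative simplification; the paper analyzes a full-duplex relay network and derives closed-form formulae rather than relying on simulation.

```python
import numpy as np

def completion_delay(K=16, erasure=0.2, rng=None):
    """Slots until a receiver collects K linearly independent coded
    packets: perfect RLNC over GF(2) on an i.i.d. erasure channel."""
    rng = rng or np.random.default_rng()
    basis, slots = {}, 0                    # leading bit -> basis vector
    while len(basis) < K:
        slots += 1
        if rng.random() < erasure:          # coded packet lost
            continue
        v = int(rng.integers(1, 1 << K))    # random nonzero coefficients
        while v:                            # online Gaussian elimination
            lead = v.bit_length() - 1
            if lead not in basis:
                basis[lead] = v             # innovative packet
                break
            v ^= basis[lead]
    return slots

delays = [completion_delay(rng=np.random.default_rng(s)) for s in range(2000)]
print("mean completion delay:", np.mean(delays))
```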
In recent years, establishing secure visual communication has become one of the essential problems for security engineers and researchers. However, few novel solutions have been proposed for image encryption, and restricting visual cryptography to a handful of schemes can have negative consequences, especially with emerging quantum computational systems. This paper presents a novel algorithm for establishing secure private visual communication. The proposed method has a layered architecture with several cohesive components and, despite its symmetric structure, corresponds to an NP-hard problem. This two-step technique is not limited to gray-scale pictures; furthermore, its use of a lattice structure gives the proposed method strong resistance in the post-quantum era and makes it relatively secure from a theoretical standpoint.
Federated learning (FL) has been recognized as a viable distributed learning paradigm that trains a machine learning model collaboratively across massive mobile devices at the wireless edge while protecting user privacy. Although various communication schemes have been proposed to expedite the FL process, most of them assume ideal wireless channels that provide reliable and lossless communication links between the server and mobile clients. Unfortunately, in practical systems with limited radio resources, such as constraints on training latency, transmission power, and bandwidth, the transmission of a large number of model parameters inevitably suffers from quantization errors (QE) and transmission outages (TO). In this paper, we consider such non-ideal wireless channels and carry out the first analysis showing that FL convergence can be severely jeopardized by TO and QE, but, intriguingly, can be alleviated if the clients have uniform outage probabilities. These insightful results motivate us to propose a robust FL scheme, named FedTOE, which performs joint allocation of wireless resources and quantization bits across the clients to minimize the QE while making the clients have the same TO probability. Extensive experimental results are presented to show the superior performance of FedTOE for deep learning-based classification tasks under transmission latency constraints.
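To see why quantization bits matter, the sketch below implements a generic unbiased stochastic quantizer and measures its mean-squared quantization error at different bit widths. It is a stand-in for illustration only; FedTOE's actual quantizer and its joint bit/resource allocation are defined in the paper.

```python
import numpy as np

def stochastic_quantize(w, bits, rng):
    """Unbiased uniform stochastic quantizer (a generic scheme used here
    to illustrate quantization error, not FedTOE's exact quantizer)."""
    levels = 2 ** bits - 1
    lo, hi = w.min(), w.max()
    scaled = (w - lo) / (hi - lo + 1e-12) * levels
    low = np.floor(scaled)
    q = low + (rng.random(w.shape) < (scaled - low))   # stochastic rounding
    return lo + q / levels * (hi - lo)

rng = np.random.default_rng(0)
w = rng.standard_normal(10_000)                        # stand-in model update
for b in (2, 4, 8):
    mse = np.mean((stochastic_quantize(w, b, rng) - w) ** 2)
    print(f"{b}-bit QE (MSE): {mse:.2e}")
```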
Multihop relaying is a potential technique to mitigate channel impairments in optical wireless communications (OWC). In this paper, multiple fixed-gain amplify-and-forward (AF) relays are employed to enhance the OWC performance under the combined effect of atmospheric turbulence, pointing errors, and fog. We consider a long-range OWC link by modeling the atmospheric turbulence by the Fisher-Snedecor ${\cal{F}}$ distribution, pointing errors by the generalized non-zero boresight model, and random path loss due to fog. We also consider a short-range OWC system by ignoring the impact of atmospheric turbulence. We derive novel upper bounds on the probability density function (PDF) and cumulative distribution function (CDF) of the end-to-end signal-to-noise ratio (SNR) for both short- and long-range multihop OWC systems by developing exact statistical results for a single-hop OWC system under the combined effect of ${\cal{F}}$-turbulence channels, non-zero boresight pointing errors, and fog-induced fading. Based on these expressions, we present analytical expressions for the outage probability (OP) and average bit-error-rate (ABER) performance of the considered OWC systems in terms of single-variate Fox's H and Meijer's G functions. Moreover, asymptotic expressions of the outage probability in the high-SNR regime are developed using simpler Gamma functions to provide insight into the effect of channel and system parameters. The derived analytical expressions are validated through Monte-Carlo simulations, and the scaling of the OWC performance with the number of relay nodes is demonstrated in comparison with single-hop transmission.
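A quick way to sanity-check such outage expressions is Monte-Carlo simulation. The sketch below draws per-hop gains from an F-distributed turbulence term, a non-zero boresight pointing-error term, and a Gamma-distributed fog attenuation, then upper-bounds the end-to-end SNR of the multihop link by the minimum hop SNR; all parameter values and the dfn/dfd mapping are assumptions for illustration.

```python
import numpy as np
from scipy.stats import f as fisher_f

# Monte-Carlo sanity check of multihop outage; every value is assumed.
rng = np.random.default_rng(1)
N, hops = 10**6, 3

def hop_snr(n):
    h_t = fisher_f.rvs(dfn=2 * 4.3, dfd=2 * 3.8, size=n, random_state=rng)
    h_t /= h_t.mean()                                   # turbulence, normalized
    s, sigma, A0, w_eq = 0.4, 0.3, 0.9, 2.0             # boresight s != 0
    r = np.hypot(s + sigma * rng.standard_normal(n),
                 sigma * rng.standard_normal(n))
    h_p = A0 * np.exp(-2 * r**2 / w_eq**2)              # pointing errors
    beta = rng.gamma(shape=2.0, scale=6.0, size=n)      # fog, dB/km
    h_f = 10 ** (-beta * 0.5 / 10)                      # 0.5 km per hop
    return 10 ** (30 / 10) * (h_t * h_p * h_f) ** 2     # 30 dB average SNR

# End-to-end SNR of fixed-gain AF links is upper-bounded by the worst hop
snr = np.min([hop_snr(N) for _ in range(hops)], axis=0)
print("outage @ 5 dB:", np.mean(snr < 10 ** (5 / 10)))
```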
Ambient lighting conditions play a crucial role in determining the perceptual quality of images from photographic devices. In general, inadequate transmitted light and undesired atmospheric conditions jointly degrade image quality. If we know the ambient factors associated with a given low-light image, we can recover the enhanced image easily \cite{b1}. Typical deep networks perform enhancement mappings without investigating the light distribution and color formulation properties, which leads to a lack of instance-adaptive performance in practice. Physical model-driven schemes, on the other hand, suffer from the need for inherent decompositions and multiple objective minimizations. Moreover, the above approaches are rarely data-efficient or free of post-prediction tuning. Motivated by these issues, this study presents a semi-supervised training method using no-reference image quality metrics for low-light image restoration. We incorporate the classical haze distribution model \cite{b2} to explore the physical properties of the given image, learn the effect of atmospheric components, and minimize a single objective for restoration. We validate the performance of our network on six widely used low-light datasets. The experiments show that the proposed method achieves state-of-the-art or comparable performance.
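The classical haze (atmospheric scattering) model referenced above is simple to state in code: the observed image is a transmission-weighted blend of the scene radiance and a global atmospheric light. The snippet below applies and inverts the model with placeholder values for the transmission map and atmospheric light, which in the paper are effectively what the network learns.

```python
import numpy as np

def apply_haze(J, t, A):
    """Atmospheric scattering model: observed I = J*t + A*(1 - t)."""
    return J * t + A * (1.0 - t)

def invert_haze(I, t, A, t_min=0.1):
    """Recover scene radiance J given transmission t and airlight A
    (supplied here as placeholders; estimating them is the hard part)."""
    return (I - A) / np.maximum(t, t_min) + A

J = np.random.default_rng(0).random((64, 64, 3))  # toy scene in [0, 1]
t = np.full((64, 64, 1), 0.6)                     # assumed uniform transmission
I = apply_haze(J, t, A=0.8)
print("max recovery error:", np.abs(invert_haze(I, t, A=0.8) - J).max())
```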
Tensor PCA is a stylized statistical inference problem introduced by Montanari and Richard to study the computational difficulty of estimating an unknown parameter from higher-order moment tensors. Unlike its matrix counterpart, Tensor PCA exhibits a statistical-computational gap, i.e., a sample-size regime where the problem is information-theoretically solvable but conjectured to be computationally hard. This paper derives computational lower bounds on the run-time of memory-bounded algorithms for Tensor PCA using communication complexity. These lower bounds specify a trade-off among the number of passes through the data sample, the sample size, and the memory required by any algorithm that successfully solves Tensor PCA. While the lower bounds do not rule out polynomial-time algorithms, they do imply that many commonly used algorithms, such as gradient descent and the power method, must have a higher iteration count when the sample size is not large enough. Similar lower bounds are obtained for Non-Gaussian Component Analysis, a family of statistical estimation problems in which low-order moment tensors carry no information about the unknown parameter. Finally, stronger lower bounds are obtained for an asymmetric variant of Tensor PCA and related statistical estimation problems. These results explain why many estimators for these problems use a memory state that is significantly larger than the effective dimensionality of the parameter of interest.
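For reference, one of the memory-hungry baselines mentioned above is tensor power iteration. The sketch below runs it on a synthetic rank-one spiked order-3 tensor; the signal strength, dimension, and noise scaling are chosen only so the toy example converges, and do not reflect the hard regime the lower bounds concern.

```python
import numpy as np

def tensor_power_method(T, iters=200, seed=0):
    """Power iteration for a symmetric order-3 tensor:
    v <- T(., v, v), renormalized each step."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = np.einsum('ijk,j,k->i', T, v, v)
        v /= np.linalg.norm(v)
    return v

# Toy spiked model: T = lam * u(x)u(x)u + noise (easy regime by construction)
n, lam = 50, 5.0
rng = np.random.default_rng(1)
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
T = lam * np.einsum('i,j,k->ijk', u, u, u) \
    + rng.standard_normal((n, n, n)) / np.sqrt(n)
v = tensor_power_method(T)
print("overlap |<u, v>|:", abs(u @ v))
```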
This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose another approach to constructing estimators such that this error is greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical and proposed estimators in recovering the causal quantities. The comparison is tested across a wide range of models, including linear regression, tree-based, and neural network-based models, on simulated datasets that exhibit different levels of causal effect, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and lending businesses. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.
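To illustrate how confounding can mislead a classical estimator, the toy Python example below simulates a lender that approves safer borrowers, then compares a naive difference-in-means estimate against an inverse-propensity-weighting (IPW) estimate, which recovers the true effect. IPW is used here as a generic de-confounding baseline; it is not necessarily the estimator constructed in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000
x = rng.standard_normal(n)                       # confounder, e.g. credit score
p = 1 / (1 + np.exp(-2 * x))                     # lender approves safer borrowers
a = rng.random(n) < p                            # credit decision (treatment)
y = 1.0 * a + 2.0 * x + rng.standard_normal(n)   # repayment; true effect = 1

naive = y[a].mean() - y[~a].mean()               # confounded difference in means

# Inverse-propensity weighting: a standard de-confounding estimator
e = LogisticRegression().fit(x[:, None], a).predict_proba(x[:, None])[:, 1]
ipw = np.mean(a * y / e) - np.mean((1 - a) * y / (1 - e))

print(f"naive: {naive:.2f}  ipw: {ipw:.2f}  truth: 1.00")
```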