
This paper addresses the electromagnetic inverse scattering problem of determining the location and shape of anisotropic objects from near-field data. We investigate this inverse problem for both the Helmholtz equation and Maxwell's equations. Our study focuses on developing efficient imaging functionals that enable fast and stable recovery of the anisotropic object. The imaging functionals are simple to implement and avoid the need to solve an ill-posed problem. Their resolution analysis is conducted using the Green representation formula, and we establish stability estimates for the imaging functionals when noise is present in the data. Numerical examples illustrate the effectiveness of the methods.
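The abstract does not spell out the imaging functionals. For orientation only, direct-sampling-type methods for the Helmholtz case typically correlate the measured scattered field with the free-space Green's function, along the lines of the illustrative functional below; this generic form is an assumption for exposition, not necessarily the functional used in the paper.

```latex
% Illustrative direct-sampling-type imaging functional (Helmholtz case,
% 3D): u^s is the scattered field measured on the surface \Gamma and
% \Phi(\cdot,z) is the free-space Green's function at sampling point z.
\[
  I(z) = \left| \int_{\Gamma} u^{s}(x)\,\overline{\Phi(x,z)}\,\mathrm{d}s(x) \right|,
  \qquad
  \Phi(x,z) = \frac{e^{\mathrm{i}k|x-z|}}{4\pi|x-z|}.
\]
% I(z) is expected to peak for sampling points z inside or near the
% scatterer, which is what a resolution analysis makes precise.
```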

Related Content

The Compact Muon Solenoid (CMS) experiment is a general-purpose detector for high-energy collisions at the Large Hadron Collider (LHC) at CERN. It employs an online data quality monitoring (DQM) system to promptly spot and diagnose particle data acquisition problems and so avoid losses in data quality. In this study, we present semi-supervised spatio-temporal anomaly detection (AD) monitoring for the physics particle reading channels of the hadronic calorimeter (HCAL) of the CMS, using three-dimensional digi-occupancy map data from the DQM. We propose the GraphSTAD system, which employs convolutional and graph neural networks to learn, respectively, the local spatial characteristics induced by particles traversing the detector and the global behavior owing to shared backend circuit connections and housing boxes of the channels. Recurrent neural networks capture the temporal evolution of the extracted spatial features. We have validated the accuracy of the proposed AD system in capturing diverse channel fault types using LHC Run-2 collision data sets. The GraphSTAD system has achieved production-level accuracy and is being integrated into the CMS core production system for real-time monitoring of the HCAL. We also provide a quantitative performance comparison with alternative benchmark models to demonstrate the advantages of the presented system.
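As a rough illustration of the recipe described above (a spatial encoder per frame, a recurrent model over time, and a reconstruction-based anomaly score), here is a minimal PyTorch sketch. The layer sizes, the 64x64 map shape, and the omission of the graph branch are placeholders for exposition; this is not the GraphSTAD implementation.

```python
# Minimal sketch of a CNN + RNN spatio-temporal anomaly detector in the
# spirit of the abstract (reconstruction-based). Shapes, sizes, and the
# omitted graph branch are illustrative placeholders.
import torch
import torch.nn as nn

class SpatioTemporalAE(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # Spatial encoder applied to each occupancy map in the sequence.
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.to_hidden = nn.LazyLinear(hidden)
        # Temporal model over the sequence of spatial embeddings.
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        # Decoder reconstructs the flattened map from the temporal state.
        self.dec = nn.Sequential(nn.Linear(hidden, 64 * 64),
                                 nn.Unflatten(1, (1, 64, 64)))

    def forward(self, x):                  # x: (batch, time, 1, 64, 64)
        b, t = x.shape[:2]
        z = self.to_hidden(self.enc(x.flatten(0, 1))).view(b, t, -1)
        h, _ = self.rnn(z)
        return self.dec(h.flatten(0, 1)).view(b, t, 1, 64, 64)

model = SpatioTemporalAE()
seq = torch.rand(2, 5, 1, 64, 64)          # toy digi-occupancy-like sequences
recon = model(seq)
# Anomaly score: reconstruction error on the most recent frame.
score = (recon[:, -1] - seq[:, -1]).pow(2).mean(dim=(1, 2, 3))
print(score.shape)                          # torch.Size([2])
```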

Vision-based tactile sensors equipped with planar contact structures acquire the shape, force, and motion states of objects in contact. The limited planar contact area makes it difficult to acquire information about larger target objects. In contrast, vision-based tactile sensors with cylindrical contact structures can extend the contact area by rolling, acquiring in a single contact much more tactile information than the sensing projection area alone provides. However, the tactile data acquired by cylindrical structures do not consistently correspond to the same depth level, so stitching and analyzing the data over an extended contact area is a challenging problem. In this work, we propose an image fusion method for cylindrical vision-based tactile sensors. The method exploits the varying contact depth of cylindrical structures, extracts the effective information at different contact depths in the frequency domain, and differentially fuses these information characteristics. The results show that when the contact area exceeds a single sensing region, images fused with our proposed method have higher information content and structural similarity than those produced by stitching based on motion-distance sampling, and the method is robust to the sampling time. We complement this method with a deep neural network to illustrate its potential for fusing and recognizing object contact information using cylindrical vision-based tactile sensors.
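The paper's differential fusion rule operates on contact-depth characteristics in the frequency domain. The numpy sketch below shows a generic band-split frequency-domain fusion of two images as a point of reference; the cutoff and the fusion rule are illustrative assumptions, not the authors' method.

```python
# Toy frequency-domain fusion of two tactile images (numpy only). The
# band-split rule is a generic illustration; the paper's differential
# fusion of contact-depth characteristics is more involved.
import numpy as np

def fuse_frequency(img_a, img_b, cutoff=0.1):
    """Keep low frequencies (coarse shape) from img_a and high
    frequencies (fine texture) from img_b, then invert the FFT."""
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    h, w = img_a.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    low = np.sqrt(fx**2 + fy**2) < cutoff      # low-frequency mask
    fused = np.where(low, Fa, Fb)
    return np.real(np.fft.ifft2(fused))

rng = np.random.default_rng(0)
a = rng.random((64, 64))     # stand-ins for two contact-depth images
b = rng.random((64, 64))
print(fuse_frequency(a, b).shape)   # (64, 64)
```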

This paper studies the application of cognitive-radio-inspired non-orthogonal multiple access (CR-NOMA) to reduce the age of information (AoI) in uplink transmission. In particular, a time division multiple access (TDMA) based legacy network is considered, where each user is allocated a dedicated time slot to transmit its status update information. CR-NOMA is implemented as an add-on to the TDMA legacy network, giving each user more opportunities to transmit by sharing other users' time slots. A rigorous analytical framework is developed to obtain expressions for the AoI achieved by CR-NOMA with and without re-transmission, taking into account the randomness of the status update generating process. Numerical results are presented to verify the accuracy of the developed analysis. They show that the AoI can be significantly reduced by applying CR-NOMA compared to TDMA. Moreover, re-transmission helps reduce the AoI, especially when the status arrival rate is low.
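To make the AoI mechanism concrete, the toy simulation below compares a tagged user that transmits only in its own TDMA slot with one that may additionally grab other users' slots, in the spirit of the CR-NOMA add-on. All probabilities and the success model are hypothetical simplifications of the paper's analysis.

```python
# Toy discrete-time AoI simulation: the tagged user owns one slot per
# frame (TDMA) and, in the CR-NOMA-like variant, may also deliver in
# other users' slots with some success probability. Parameters are
# hypothetical; the paper develops exact analytical expressions.
import numpy as np

def mean_aoi(extra_slots, frames=50_000, n_users=5,
             p_arrival=0.2, p_extra_success=0.5, seed=0):
    rng = np.random.default_rng(seed)
    age, gen_age, have_packet = 0, 0, False
    total = 0
    for _ in range(frames):
        for slot in range(n_users):
            age += 1                        # receiver-side AoI grows
            if have_packet:
                gen_age += 1                # held update gets stale too
            total += age
            if not have_packet and rng.random() < p_arrival:
                have_packet, gen_age = True, 0   # fresh status update
            own = slot == 0
            grabbed = extra_slots and rng.random() < p_extra_success
            if have_packet and (own or grabbed):
                age, have_packet = gen_age, False   # delivered
    return total / (frames * n_users)

print("TDMA only      :", round(mean_aoi(False), 2))
print("TDMA + CR-NOMA :", round(mean_aoi(True), 2))   # noticeably lower
```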

This paper provides and extends second-order versions of several fundamental theorems on first-order regularly varying functions, such as Karamata's theorem, the representation theorem, and Karamata's Tauberian theorem. Our results are used to establish second-order approximations for the mean and variance of Hawkes processes with general kernels. These approximations provide novel insights into the asymptotic behavior of Hawkes processes and are also of key importance when establishing functional limit theorems for Hawkes processes.
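For reference, the first-order result that the paper lifts to second order is Karamata's theorem; its direct half can be stated as follows.

```latex
% Karamata's theorem (direct half): if L is slowly varying at infinity,
% locally bounded on [x_0, \infty), and \rho > -1, then
\[
  \int_{x_0}^{x} t^{\rho} L(t)\,\mathrm{d}t \;\sim\; \frac{x^{\rho+1}L(x)}{\rho+1},
  \qquad x \to \infty.
\]
% Second-order regular variation quantifies the speed of such asymptotic
% relations, which is what the paper's second-order versions make precise.
```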

This paper proposes a novel design of multi-symbol unitary constellations for non-coherent single-input multiple-output (SIMO) communication over block Rayleigh fading channels. To facilitate the design and detection of large unitary constellations at reduced complexity, the proposed constellations are constructed as the Cartesian product of independent amplitude and phase-shift-keying (PSK) vectors, and hence can be detected iteratively. The amplitude vector is detected by exhaustive search, whose complexity is sufficiently low in short-packet transmission scenarios. To detect the PSK vector, we use the posterior probability as a reliability criterion in the sorted decision-feedback differential detection (sort-DFDD), which yields near-optimal error performance for PSK symbols with equal modulation orders. This detector, called posterior-reliability-based sort-DFDD (PR-sort-DFDD), has polynomial complexity. We also propose an improved detector, called improved PR-sort-DFDD, for a more general PSK structure, i.e., PSK symbols with unequal modulation orders. This detector also approaches optimal error performance with polynomial complexity. Simulation results show the merits of our proposed multi-symbol unitary constellation compared to competing low-complexity unitary constellations.
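The Cartesian-product structure is easy to picture: each codeword pairs a candidate amplitude vector with independently chosen PSK phases. The numpy sketch below builds such a constellation for a toy block length; the amplitude set and modulation orders are illustrative, not the paper's optimized design.

```python
# Toy construction of a multi-symbol constellation as the Cartesian
# product of an amplitude-vector set and independent PSK phases
# (illustrative sizes; not the paper's optimized design).
import itertools
import numpy as np

T = 3                                    # symbols per coherence block
amp_set = [np.array([1.2, 1.0, 0.8]),    # candidate amplitude vectors
           np.array([1.0, 1.0, 1.0])]
psk_orders = [4, 4, 2]                   # per-symbol PSK modulation orders

codewords = []
for a in amp_set:
    for phases in itertools.product(*[range(M) for M in psk_orders]):
        s = a * np.exp(2j * np.pi * np.array(phases) / np.array(psk_orders))
        codewords.append(s / np.linalg.norm(s))   # unit-norm block
X = np.array(codewords)
print(X.shape)    # (2 * 4 * 4 * 2, 3) = (64, 3)
```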

We propose an extension of Akaike's relative power contribution that can be applied to data with correlated noise terms. The method decomposes the power spectrum into contributions of the terms caused by the correlation between two noises, in addition to the contributions of the independent noises. Numerical examples confirm that the correlated-noise terms can have the effect of reducing the power spectrum.
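The decomposition can be illustrated on a bivariate autoregressive model: writing the spectral matrix as $H(f)\Sigma H(f)^{*}$ with transfer function $H$ and noise covariance $\Sigma$, the spectrum of each component splits into per-noise terms plus a cross term that vanishes when the noises are uncorrelated. The numpy sketch below uses illustrative coefficients; the sign of the cross term shows how correlated noise can reduce the spectrum.

```python
# Numpy sketch: decompose the power spectrum of a bivariate AR(1) model
# y_t = A y_{t-1} + e_t, Cov(e) = Sigma, into per-noise contributions
# plus a cross term due to noise correlation (illustrative values).
import numpy as np

A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
Sigma = np.array([[1.0, 0.6],
                  [0.6, 1.0]])           # correlated noises

f = 0.1                                  # normalized frequency in [0, 0.5]
I = np.eye(2)
H = np.linalg.inv(I - A * np.exp(-2j * np.pi * f))   # transfer function

i = 0   # decompose the spectrum of the first variable
own = [abs(H[i, j])**2 * Sigma[j, j] for j in range(2)]
cross = 2 * np.real(H[i, 0] * Sigma[0, 1] * np.conj(H[i, 1]))
total = (H @ Sigma @ H.conj().T)[i, i].real
print(own, cross, total, sum(own) + cross)   # last two values agree
```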

This work proposes optimizing the geometry of a quadrupole magnet by means of a genetic algorithm adapted to multi-objective optimization problems. To that end, the non-dominated sorting genetic algorithm NSGA-III is used. The optimization objectives are chosen such that a high magnetic field quality in the aperture of the magnet is guaranteed while the magnet design simultaneously remains cost-efficient. The field quality is computed using a magnetostatic finite element model of the quadrupole, the results of which are post-processed and integrated into the optimization algorithm. An extensive analysis of the optimization results is performed, including Pareto front movements and the identification of the best designs.
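For readers who want to experiment with the algorithm itself, NSGA-III is available in the pymoo library; the sketch below runs it on a standard test problem. In the paper's setting the objective evaluations would instead come from the post-processed magnetostatic finite element model of the quadrupole.

```python
# Sketch of running NSGA-III with the pymoo library on a standard
# 3-objective test problem (DTLZ2). The FE-based magnet objectives
# would replace the test problem in the paper's setting.
from pymoo.algorithms.moo.nsga3 import NSGA3
from pymoo.optimize import minimize
from pymoo.problems import get_problem
from pymoo.util.ref_dirs import get_reference_directions

# Das-Dennis reference directions structure the non-dominated sorting.
ref_dirs = get_reference_directions("das-dennis", 3, n_partitions=12)
algorithm = NSGA3(pop_size=92, ref_dirs=ref_dirs)

res = minimize(get_problem("dtlz2"), algorithm,
               termination=("n_gen", 200), seed=1, verbose=False)
print(res.F.shape)   # objective values of the final non-dominated set
```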

Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function as the number of layers increases. Through detailed numerical experiments, we investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth. We observe the existence of scaling regimes markedly different from those assumed in the neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation, or neither of these. These findings cast doubt on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
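A toy version of this kind of diagnostic can be set up in a few lines: train a small residual network and inspect how much the weights of consecutive blocks differ, which probes whether the weights behave like a smooth function of depth. The architecture, the 1/depth residual scaling, and the synthetic data below are placeholders, not the paper's experimental setup.

```python
# Toy diagnostic: train a deep residual MLP briefly, then inspect the
# norm of consecutive-layer weight differences across depth.
import torch
import torch.nn as nn

depth, width = 32, 16
blocks = nn.ModuleList(
    [nn.Sequential(nn.Linear(width, width), nn.Tanh()) for _ in range(depth)])
head = nn.Linear(width, 1)

def forward(x):
    for blk in blocks:
        x = x + blk(x) / depth   # 1/depth scaling as in neural-ODE setups
    return head(x)

x = torch.randn(256, width)
y = torch.randn(256, 1)          # synthetic regression targets
opt = torch.optim.Adam(list(blocks.parameters()) + list(head.parameters()),
                       lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(forward(x), y)
    loss.backward()
    opt.step()

W = [blk[0].weight.detach() for blk in blocks]
diffs = [torch.norm(W[k + 1] - W[k]).item() for k in range(depth - 1)]
print(diffs[:3], diffs[-3:])   # smooth-in-depth weights => small differences
```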

With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
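The re-weighting scheme is straightforward to compute from the formula in the abstract: each class weight is the inverse of its effective number of samples, normalized across classes. The class counts below are illustrative.

```python
# Class-balanced weights from the effective number of samples, using
# the formula from the abstract: E_n = (1 - beta**n) / (1 - beta).
import numpy as np

def class_balanced_weights(counts, beta=0.999):
    counts = np.asarray(counts, dtype=float)
    eff_num = (1.0 - beta**counts) / (1.0 - beta)   # effective numbers
    w = 1.0 / eff_num
    return w * len(counts) / w.sum()                # normalize to sum to C

counts = [5000, 2000, 500, 50, 5]   # illustrative long-tailed frequencies
print(class_balanced_weights(counts).round(3))
# Rare classes get larger weights; these rescale the per-class terms of
# a standard loss such as softmax cross-entropy.
```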

While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions that fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
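A stripped-down version of such a deep-feature metric can be assembled from torchvision's pretrained VGG16: compare unit-normalized activations at a few layers and accumulate the squared differences. This unweighted sketch (which also skips ImageNet input normalization) is only a rough stand-in; the paper's learned metric additionally calibrates per-channel weights on human judgments.

```python
# Simplified deep-feature perceptual distance using torchvision's VGG16:
# unit-normalize activations at a few ReLU layers and average squared
# differences. Inputs are assumed to be (1, 3, 224, 224) tensors in [0, 1];
# proper ImageNet mean/std normalization is omitted for brevity.
import torch
from torchvision.models import vgg16, VGG16_Weights

vgg = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features.eval()
taps = {3, 8, 15, 22, 29}   # ReLU outputs of the five conv stages

def deep_feature_distance(x, y):
    d = 0.0
    with torch.no_grad():
        for idx, layer in enumerate(vgg):
            x, y = layer(x), layer(y)
            if idx in taps:
                fx = x / (x.norm(dim=1, keepdim=True) + 1e-10)
                fy = y / (y.norm(dim=1, keepdim=True) + 1e-10)
                d += (fx - fy).pow(2).mean().item()
    return d

a, b = torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224)
print(deep_feature_distance(a, b))   # larger = more perceptually different
```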
