
In this study, we consider the application of the orthogonality sampling method (OSM) with single and multiple sources for the fast identification of small objects in the limited-aperture inverse scattering problem. We first apply the OSM with a single source and show that the corresponding indicator function can be expressed in terms of the Bessel function of the first kind of order zero, an infinite series of Bessel functions of the first kind of nonzero integer order, the range of the signal receiver, and the location of the emitter. Based on this result, we explain that the objects can be identified through the OSM with a single source, but that the identification is significantly influenced by the location of the source and the applied frequency. To improve the identification, we then consider the OSM with multiple sources. Guided by the structure identified for the single-source OSM, we design an indicator function for the OSM with multiple sources and show that it can be expressed in terms of the square of the Bessel function of the first kind of order zero and an infinite series of squares of Bessel functions of the first kind of nonzero integer order. Based on these theoretical results, we explain that the objects can be identified uniquely through the designed OSM. Several numerical experiments with experimental data provided by the Institut Fresnel demonstrate the pros and cons of the OSM with a single source and how the designed OSM with multiple sources behaves.
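To make the structural difference concrete, the following minimal Python sketch (an illustration only, not the authors' code) compares the leading term |J_0(kr)| of the single-source indicator with the leading term J_0(kr)^2 of the multiple-source indicator; squaring sharpens the main lobe around the true location and damps the oscillating side lobes, which is the intuition behind the improved identification. The wavenumber below is a hypothetical choice.

```python
# Minimal illustration (not the authors' code): compare the leading Bessel terms of the
# single-source and multiple-source OSM indicators as functions of the distance between
# the search point and the true object location.
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import j0

k = 2 * np.pi / 0.3                 # hypothetical wavenumber (wavelength 0.3 m)
r = np.linspace(0.0, 1.0, 500)      # distance |x - z| between search point and object

single = np.abs(j0(k * r))          # leading term of the single-source indicator
multi = j0(k * r) ** 2              # leading term of the multiple-source indicator

plt.plot(r, single, label=r"$|J_0(kr)|$ (single source)")
plt.plot(r, multi, label=r"$J_0(kr)^2$ (multiple sources)")
plt.xlabel(r"$r = |x - z|$")
plt.legend()
plt.show()
```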

Related Content

In this paper, we consider a deep learning approach to the limited-aperture inverse obstacle scattering problem. It is well known that traditional deep learning relies solely on data, which may limit its performance on inverse problems where only indirect observation data and a physical model are available. A fundamental question arises in light of these limitations: is it possible to enable deep learning to work on inverse problems without labeled data and to be aware of what it is learning? This work proposes a deep decomposition method (DDM) for this purpose, which does not require ground-truth labels. It accomplishes this by providing the physical operators associated with the scattering model to the neural network architecture. Additionally, a deep-learning-based data completion scheme is implemented in DDM so that limited-aperture data do not distort the solution of the inverse problem. Furthermore, apart from addressing the ill-posedness of the inverse problem itself, DDM is a physics-aware machine learning technique with an interpretability property. The convergence of DDM is proven theoretically. Numerical experiments are presented to demonstrate the validity of the proposed DDM even when the incident and observation apertures are extremely limited.
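The label-free idea can be sketched as follows: the network's reconstruction is pushed through a known forward (physical) operator and compared with the measured data, so no ground-truth obstacles are needed. The sketch below is schematic only; a toy linear operator stands in for the scattering model, and the paper's decomposition and data-completion steps are omitted.

```python
# Schematic sketch (not the paper's DDM implementation): train a network using only the
# physics residual ||A(net(d)) - d||, i.e. without ground-truth labels. All names and
# sizes below are illustrative placeholders.
import torch

torch.manual_seed(0)
A = torch.randn(64, 32)                      # toy linear forward operator (stand-in for the scattering model)
d_obs = A @ torch.randn(32)                  # synthetic "measured" data

net = torch.nn.Sequential(                   # small network mapping data to a reconstruction
    torch.nn.Linear(64, 128), torch.nn.ReLU(), torch.nn.Linear(128, 32)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x_pred = net(d_obs)                      # reconstruction from observed data only
    loss = torch.norm(A @ x_pred - d_obs)    # physics residual: no ground-truth label is used
    opt.zero_grad()
    loss.backward()
    opt.step()
```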

In this article, a two-level overlapping domain decomposition preconditioner is developed for solving linear algebraic systems obtained from simulating Darcy flow in high-contrast media. Our preconditioner starts from a mixed finite element discretization of the partial differential equation governed by Darcy's law with a no-flux boundary condition, followed by a velocity elimination technique that yields a linear algebraic system whose only unknowns are pressures. Our main objective is then to design a robust and efficient domain decomposition preconditioner for this system, which is accomplished by engineering a multiscale coarse space capable of characterizing the high-contrast features of the permeability field. A generalized eigenvalue problem is solved in each non-overlapping coarse element in a communication-free manner to form the global solver, which is accompanied by local solvers originating from additive Schwarz methods but with a non-Galerkin discretization, giving the two-level preconditioner. We provide a rigorous analysis indicating that, under several assumptions, the condition number of the preconditioned system can be bounded from above. Extensive numerical experiments with various types of three-dimensional high-contrast models are presented. In particular, we study the robustness against the contrast of the media, as well as the influence of the number of eigenfunctions, the oversampling size, and the subdomain partition on the efficiency of the proposed preconditioner. In addition, strong and weak scalability are also examined.
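For orientation, a classical two-level additive Schwarz preconditioner has the form shown below; the construction described above follows this template but uses the spectral multiscale coarse space for the global solve and a non-Galerkin discretization for the local solvers, so the formula is a reference point rather than the exact preconditioner of the paper:

$$ M^{-1} \;=\; R_0^{\top} A_0^{-1} R_0 \;+\; \sum_{i=1}^{N} R_i^{\top} A_i^{-1} R_i, $$

where $R_i$ ($i\ge 1$) restricts to the $i$-th overlapping subdomain (in the classical Galerkin case $A_i = R_i A R_i^{\top}$), and $R_0$ maps into the coarse space spanned by the selected eigenfunctions.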

In this paper, we develop a multigrid preconditioner for solving Darcy flow in highly heterogeneous porous media. The key component of the preconditioner is the construction of a sequence of nested subspaces $W_{\mathcal{L}}\subset W_{\mathcal{L}-1}\subset\cdots\subset W_1=W_h$. An appropriate spectral problem is defined in the space $W_{i-1}$, and the eigenfunctions of this spectral problem are then used to form $W_i$. The preconditioner is applied to solve a positive semidefinite linear system that results from discretizing the Darcy flow equation with the lowest-order Raviart-Thomas spaces and adopting a trapezoidal quadrature rule. Theoretical analysis and numerical investigations of this preconditioner are presented. In particular, we consider several typical highly heterogeneous permeability fields with resolutions up to $1024^3$ and examine the computational performance of the preconditioner in several respects, such as strong scalability, weak scalability, and robustness against the contrast of the media. We also demonstrate an application of this preconditioner to solving a two-phase flow benchmark problem.
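The coarsening idea can be sketched as follows (a schematic illustration, not the authors' implementation): at each level a generalized eigenvalue problem is solved in the current space, and the eigenvectors associated with the smallest eigenvalues span the next, coarser space $W_{i+1}\subset W_i$. Matrix names, sizes, and the Galerkin coarse operators below are illustrative assumptions.

```python
# Schematic sketch of spectral coarsening: keep the lowest generalized eigenmodes at each
# level to define the next coarse space. Toy SPD matrices stand in for the Darcy operators.
import numpy as np
from scipy.linalg import eigh

def coarsen(A_i, M_i, n_coarse):
    """Return a prolongation whose columns span the next coarse space."""
    # Generalized eigenproblem A_i v = lambda M_i v; keep the n_coarse smallest modes.
    _, vecs = eigh(A_i, M_i, subset_by_index=[0, n_coarse - 1])
    return vecs                                   # shape (dim W_i, n_coarse)

rng = np.random.default_rng(0)
B = rng.standard_normal((200, 200))
A = [B @ B.T + 200 * np.eye(200)]                # SPD fine-level operator (toy stand-in)
M = [np.eye(200)]
P = []                                           # level-to-level prolongations

for n_coarse in (40, 8):                         # two coarsening steps: 200 -> 40 -> 8
    P.append(coarsen(A[-1], M[-1], n_coarse))
    A.append(P[-1].T @ A[-1] @ P[-1])            # Galerkin coarse operator (simplification)
    M.append(P[-1].T @ M[-1] @ P[-1])
```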

In this work, the infinite GMRES algorithm, recently proposed by Correnty et al., is employed within contour-integral-based nonlinear eigensolvers, avoiding the computation of costly factorizations at each quadrature node when solving the linear systems. Several techniques are applied to make infinite GMRES memory-friendly, computationally efficient, and numerically stable in practice. More specifically, we analyze the relationship between polynomial eigenvalue problems and their scaled linearizations, and provide a novel weighting strategy that can significantly accelerate the convergence of infinite GMRES in this particular context. We also adapt the TOAR technique to infinite GMRES to reduce the memory footprint. Theoretical analysis and numerical experiments are provided to illustrate the efficiency of the proposed algorithm.
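As a reference point for the linearization step, the classical (unscaled) first companion form of a polynomial eigenvalue problem $P(\lambda)x=\bigl(\sum_{j=0}^{d}\lambda^{j}A_{j}\bigr)x=0$ is shown below; the paper works with scaled variants of such linearizations and derives its weighting strategy from their relationship to the original problem, so this form is illustrative rather than the exact pencil used:

$$
\left(\lambda
\begin{bmatrix} A_d & & & \\ & I & & \\ & & \ddots & \\ & & & I \end{bmatrix}
+
\begin{bmatrix} A_{d-1} & A_{d-2} & \cdots & A_0 \\ -I & 0 & \cdots & 0 \\ & \ddots & \ddots & \vdots \\ & & -I & 0 \end{bmatrix}
\right)
\begin{bmatrix} \lambda^{d-1}x \\ \lambda^{d-2}x \\ \vdots \\ x \end{bmatrix} = 0,
$$

so every eigenpair of $P$ corresponds to an eigenpair of the linear pencil, and a scaling $\lambda=\gamma\mu$ rebalances the blocks of the two matrices.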

Indoor positioning using UWB technology has gained interest due to its potential for centimeter-level accuracy. However, multipath effects and non-line-of-sight conditions cause ranging errors between anchors and tags. Existing approaches for mitigating these ranging errors rely on collecting large labeled datasets, making them impractical for real-world deployments. This paper proposes a novel self-supervised deep reinforcement learning approach that does not require labeled ground-truth data. A reinforcement learning agent uses the channel impulse response as its state and predicts corrections that minimize the error between the corrected and estimated ranges. The agent learns in a self-supervised manner by iteratively improving corrections that are generated by combining the predictability of trajectories with filtering and smoothing. Experiments on real-world UWB measurements demonstrate performance comparable to state-of-the-art supervised methods, overcoming their limitations of data dependency and poor generalizability. This makes self-supervised deep reinforcement learning a promising solution for practical and scalable UWB ranging-error correction.
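A simplified sketch of the self-supervision signal described above (not the authors' agent): raw UWB ranges are filtered and smoothed along the trajectory, and the smoothed ranges act as pseudo-targets, so a correction model can be trained without ground truth. The trajectory, noise model, smoother, and constants below are illustrative assumptions.

```python
# Illustrative construction of label-free correction targets from trajectory smoothness.
import numpy as np

rng = np.random.default_rng(1)
true_range = 5.0 + 0.5 * np.sin(np.linspace(0, 4 * np.pi, 400))                    # smooth trajectory-induced range
raw_range = true_range + rng.normal(0.0, 0.1, 400) + 0.3 * (rng.random(400) < 0.1)  # noise plus sparse NLOS bias

def smooth(x, w=15):
    """Moving-average smoother standing in for the paper's filtering/smoothing step."""
    kernel = np.ones(w) / w
    return np.convolve(x, kernel, mode="same")

pseudo_target = smooth(raw_range)              # label-free target derived from trajectory predictability
correction_target = pseudo_target - raw_range  # what the learning agent would predict from the CIR
print("mean |raw error|   :", np.mean(np.abs(raw_range - true_range)))
print("mean |pseudo error|:", np.mean(np.abs(pseudo_target - true_range)))
```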

In this paper, we study different approaches for classifying emotions from speech using acoustic and text-based features. We propose to obtain contextualized word embeddings with BERT to represent the information contained in speech transcriptions and show that this yields better performance than using GloVe embeddings. We also propose and compare different strategies for combining the audio and text modalities, evaluating them on the IEMOCAP and MSP-PODCAST datasets. We find that fusing the acoustic and text-based systems is beneficial on both datasets, though only subtle differences are observed across the evaluated fusion approaches. Finally, for IEMOCAP, we show the large effect that the criteria used to define the cross-validation folds have on the results. In particular, the standard way of creating folds for this dataset results in a highly optimistic estimate of performance for the text-based system, suggesting that some previous works may overestimate the advantage of incorporating transcriptions.
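A minimal sketch of the text branch and a naive fusion, assuming a HuggingFace BERT checkpoint and a hypothetical precomputed acoustic feature vector; the paper compares several fusion strategies, whereas this sketch only concatenates the two representations before a linear classifier.

```python
# Illustrative sketch (not the paper's system): pool BERT's contextualized word embeddings
# into an utterance-level text vector and fuse it with a stand-in acoustic feature vector.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def text_embedding(transcript: str) -> torch.Tensor:
    inputs = tok(transcript, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = bert(**inputs).last_hidden_state      # (1, seq_len, 768) contextual embeddings
    return hidden.mean(dim=1).squeeze(0)               # simple mean pooling over tokens

text_vec = text_embedding("I cannot believe this happened")
acoustic_vec = torch.randn(88)                         # stand-in for utterance-level acoustic features
fused = torch.cat([text_vec, acoustic_vec])            # naive feature-level fusion
classifier = torch.nn.Linear(fused.numel(), 4)         # 4 emotion classes (illustrative)
logits = classifier(fused)
```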

In the context of the model-driven development of data-centric applications, OCL constraints play a major role in adding precision to the source models (e.g., data models and security models). Several code generators have been proposed to bridge the gap between source models with OCL constraints and their corresponding database implementations. However, the database queries produced by these code generators are significantly less efficient, in terms of execution-time performance, than the implementations manually written by database experts. In this paper, we propose a different approach to bridging the gap between models with OCL constraints and their corresponding database implementations. In particular, we introduce a model-based methodology for proving the correctness of manually written SQL implementations of OCL constraints. This methodology is based on a novel mapping from a significant subset of the SQL language into many-sorted first-order logic. Moreover, by leveraging an existing mapping from the OCL language into many-sorted first-order logic, we can use SMT solvers to automatically prove the correctness of SQL implementations of OCL constraints. To illustrate the applicability of our approach, we include a number of non-trivial examples in the paper. Finally, we report on the status of a suite of tools supporting our approach.
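A miniature illustration of the idea (not the paper's mapping or tool): both an OCL-style invariant and a hand-written SQL check are translated into many-sorted first-order logic, and an SMT solver decides whether they agree. The schema, constraint, and encoding below are illustrative assumptions only.

```python
# Prove (with Z3) that a hand-written SQL check is equivalent to an OCL-style invariant
# by asking whether a disagreeing model exists.
from z3 import DeclareSort, Function, Const, IntSort, ForAll, Exists, Not, Solver, unsat

Employee = DeclareSort("Employee")                     # sort for rows of table Employee
salary = Function("salary", Employee, IntSort())       # column Employee.salary
e = Const("e", Employee)

# OCL-style invariant: Employee.allInstances()->forAll(e | e.salary > 0)
ocl = ForAll(e, salary(e) > 0)
# Hand-written SQL check: NOT EXISTS (SELECT * FROM Employee WHERE salary <= 0)
sql = Not(Exists(e, salary(e) <= 0))

s = Solver()
s.add(Not(ocl == sql))                                 # look for a model where they disagree
print("equivalent" if s.check() == unsat else "not equivalent")
```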

With the advent of the emerging metaverse and its related applications in Augmented Reality (AR), current bit-oriented networks struggle to support real-time updates of the vast amount of associated information, hindering their development. Thus, a critical revolution in Sixth Generation (6G) networks is envisioned through the joint exploitation of the context of information and its importance to the task, leading to a communication paradigm shift towards the semantic and effectiveness levels. However, current research has not yet proposed an explicit and systematic communication framework for AR applications that incorporates these two levels. To fill this research gap, this paper presents a task-oriented and semantics-aware communication framework for augmented reality (TSAR) to enhance communication efficiency and effectiveness in 6G. Specifically, we first analyse the traditional wireless AR point cloud communication framework and then describe the proposed semantic information together with the end-to-end wireless communication pipeline. We then detail the design blocks of the TSAR framework, covering both the semantic and effectiveness levels. Finally, extensive experiments demonstrate that, compared to the traditional point cloud communication framework, the proposed TSAR significantly reduces the transmission latency of wireless AR applications by 95.6%, while improving communication effectiveness in the geometry and color aspects by up to 82.4% and 20.4%, respectively.

In this paper, we propose the unfitted spectral element method for solving elliptic interface problems and the corresponding eigenvalue problems. The novelty of the proposed method lies in combining the spectral accuracy of the spectral element method with the flexibility of the unfitted Nitsche's method. We also use tailored ghost penalty terms to enhance its robustness. We establish optimal $hp$ convergence rates for both elliptic interface problems and interface eigenvalue problems. Additionally, we demonstrate spectral accuracy with respect to the polynomial degree for model problems.
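For orientation, a standard unfitted Nitsche bilinear form for a diffusion interface problem with piecewise coefficient $\beta$ reads as below, augmented by a derivative-jump ghost penalty on the faces near the cut; the tailored penalty terms of the paper and their $hp$-dependent scalings may differ from this generic template:

$$
a_h(u,v)=\sum_{i=1}^{2}\int_{\Omega_i}\beta_i\nabla u\cdot\nabla v\,dx
-\int_{\Gamma}\{\beta\,\partial_n u\}[v]\,ds
-\int_{\Gamma}\{\beta\,\partial_n v\}[u]\,ds
+\int_{\Gamma}\frac{\gamma\,p^2}{h}[u][v]\,ds
+\sum_{F\in\mathcal{F}_g}\sum_{j=1}^{p}\gamma_g\,h^{2j-1}\int_{F}[\partial_n^{j}u][\partial_n^{j}v]\,ds,
$$

where $[\cdot]$ and $\{\cdot\}$ denote the jump and a weighted average across the interface $\Gamma$, and $\mathcal{F}_g$ collects the ghost-penalty faces of the cut elements.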

In this paper, we propose a method to improve the accuracy of speech emotion recognition (SER) by using a vision transformer (ViT) to attend to the correlation of frequency (y-axis) with time (x-axis) in the spectrogram, and by transferring positional information between ViTs through knowledge transfer. The proposed method is original in the following respects. i) We use vertically segmented patches of the log-Mel spectrogram to analyze the correlation of frequencies over time. This type of patch allows us to correlate the frequencies most relevant to a particular emotion with the times at which they were uttered. ii) We propose image coordinate encoding, an absolute positional encoding suitable for ViT. By normalizing the x and y coordinates of the image to the range -1 to 1 and concatenating them to the image, we can provide valid absolute positional information to the ViT (see the sketch below). iii) Through feature map matching, the locality and positional information of the teacher network are effectively transmitted to the student network. The teacher network is a ViT that incorporates the locality of a convolutional stem and absolute positional information through image coordinate encoding, while the student network is a basic ViT without positional encoding. In the feature-map-matching stage, we train with the mean absolute error (L1 loss) to minimize the difference between the feature maps of the two networks. To validate the proposed method, speech from three emotion datasets (SAVEE, EmoDB, and CREMA-D) was converted into log-Mel spectrograms for comparison experiments. The experimental results show that the proposed method significantly outperforms state-of-the-art methods in terms of weighted accuracy while requiring significantly fewer floating-point operations (FLOPs). Overall, the proposed method offers a promising solution for SER by providing improved efficiency and performance.
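A minimal sketch of the image coordinate encoding described in point ii) above (not the authors' code): x and y pixel coordinates are normalized to [-1, 1] and concatenated to the log-Mel spectrogram as two extra channels before patchification. The tensor shapes are illustrative.

```python
# Append normalized x/y coordinate channels to a batch of spectrograms.
import torch

def add_coordinate_channels(spec):
    """spec: (B, 1, H, W) log-Mel spectrogram -> (B, 3, H, W) with x/y coordinate channels."""
    b, _, h, w = spec.shape
    ys = torch.linspace(-1.0, 1.0, h).view(1, 1, h, 1).expand(b, 1, h, w)   # frequency axis
    xs = torch.linspace(-1.0, 1.0, w).view(1, 1, 1, w).expand(b, 1, h, w)   # time axis
    return torch.cat([spec, xs, ys], dim=1)

spec = torch.randn(4, 1, 128, 256)            # batch of hypothetical log-Mel spectrograms
print(add_coordinate_channels(spec).shape)    # torch.Size([4, 3, 128, 256])
```

The feature-map-matching step of point iii) then amounts to an L1 loss between corresponding teacher and student feature maps during training.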
