
We present an implicit-explicit finite volume scheme for two-fluid single-temperature flow in all Mach number regimes, based on a symmetric hyperbolic thermodynamically compatible description of the fluid flow. The scheme is stable for large time steps controlled by the interface transport and is computationally efficient owing to its linearly implicit character. The latter is achieved by linearizing along constant reference states given by the asymptotic analysis of the single-temperature model. The use of a stiffly accurate IMEX Runge-Kutta time integration together with the centered treatment of pressure-based quantities provably guarantees the asymptotic-preserving property of the scheme for the weakly compressible Euler equations with variable volume fraction. The properties of the first- and second-order schemes are validated on several numerical test cases.
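
To make the "linearly implicit" idea concrete, here is a minimal sketch of the first-order building block of such schemes: the stiff part is linearized about a reference state and advanced with a single linear solve per step, while the non-stiff transport is explicit. The splitting and names below are illustrative assumptions, not the paper's actual discretization.

```python
import numpy as np

def imex_euler_step(u, dt, f_explicit, A_implicit):
    """One linearly implicit IMEX Euler step for u' = f_E(u) + A u.

    f_explicit : callable, non-stiff part (e.g. interface transport),
                 advanced explicitly.
    A_implicit : (n, n) constant matrix, stiff part linearized about a
                 reference state, advanced implicitly.
    Solving (I - dt*A) u_new = u + dt*f_E(u) needs only one linear
    solve per step, which is what makes the scheme linearly implicit.
    """
    n = u.size
    rhs = u + dt * f_explicit(u)
    return np.linalg.solve(np.eye(n) - dt * A_implicit, rhs)
```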

Related content

I consider the natural infinitary variations of the games Wordle and Mastermind, as well as their game-theoretic variations Absurdle and Madstermind, considering these games with infinitely long words and infinite color sequences and allowing transfinite game play. For each game, a secret codeword is hidden, which the codebreaker attempts to discover by making a series of guesses and receiving feedback as to their accuracy. In Wordle with words of any size from a finite alphabet of $n$ letters, including infinite words or even uncountable words, the codebreaker can nevertheless always win in $n$ steps. Meanwhile, the mastermind number, defined as the size of the smallest winning set of guesses in infinite Mastermind for sequences of length $\omega$ over a countable set of colors without duplication, is uncountable, but the exact value turns out to be independent of ZFC, for it is provably equal to the eventually different number $\frak{d}({\neq^*})$, which is the same as the covering number of the meager ideal $\text{cov}(\mathcal{M})$. I thus place all the various mastermind numbers, defined for the natural variations of the game, into the hierarchy of cardinal characteristics of the continuum.
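
To illustrate the $n$-step Wordle bound for finite-length words: guessing the constant word for each letter in turn reveals, from the green positions alone, exactly where that letter occurs, so the secret is pinned down after at most $n$ guesses. A small sketch under this simplified green-only feedback model (the function and its interface are ours, not the paper's):

```python
def solve_wordle(secret, alphabet):
    """Win Wordle on any word length in at most len(alphabet) guesses.

    Guess the constant word 'aaa...', then 'bbb...', and so on; the
    green-position feedback for letter c reveals exactly the positions
    where the secret holds c.  After the first n-1 letters, any still
    unknown position must hold the last letter, so guess n is the word.
    """
    m = len(secret)
    known = [None] * m
    guesses = 0
    for c in alphabet[:-1]:
        guesses += 1
        for i in range(m):
            if secret[i] == c:        # green feedback at position i
                known[i] = c
        if all(k is not None for k in known):
            break
    else:
        # every remaining position must hold the last letter
        known = [k if k is not None else alphabet[-1] for k in known]
    guesses += 1                      # final guess: the reconstructed word
    assert "".join(known) == secret
    return guesses
```

For example, `solve_wordle("abca", "abc")` wins in 3 guesses, matching the $n = 3$ bound.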

Hyperspectral imagery contains abundant spectral information beyond the visible RGB bands, providing rich discriminative details about objects in a scene. Leveraging such data has the potential to enhance visual tracking performance. While prior hyperspectral trackers employ CNN or hybrid CNN-Transformer architectures, we propose HPFormer, a novel approach built on Transformers, to capitalize on their powerful representation learning capabilities. The core of HPFormer is a Hyperspectral Hybrid Attention (HHA) module which unifies feature extraction and fusion within one component through token interactions. Additionally, a Transform Band Module (TBM) is introduced to selectively aggregate spatial details and spectral signatures from the full hyperspectral input to inject informative target representations. Extensive experiments demonstrate state-of-the-art performance of HPFormer on benchmark NIR and VIS tracking datasets. Our work provides new insights into harnessing the strengths of transformers and hyperspectral fusion to advance robust object tracking.
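
As a rough illustration of how a single attention component can unify extraction and fusion, the toy block below concatenates template and search tokens and runs joint self-attention over both; all names and dimensions are hypothetical, and this is not the published HHA definition.

```python
import torch
import torch.nn as nn

class HybridAttentionSketch(nn.Module):
    """Toy stand-in for a hybrid attention block: template and search
    tokens are concatenated so one self-attention pass performs both
    feature extraction (within each token set) and fusion (across sets).
    """
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, template_tokens, search_tokens):
        x = torch.cat([template_tokens, search_tokens], dim=1)
        y, _ = self.attn(x, x, x)          # joint token interactions
        x = self.norm(x + y)
        n_t = template_tokens.shape[1]
        return x[:, :n_t], x[:, n_t:]      # split back into the two sets
```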

We propose an automatic data processing pipeline to extract vocal productions from large-scale natural audio recordings and to classify these vocal productions. The pipeline is based on a deep neural network and addresses both tasks simultaneously. Through a series of computational steps (windowing, creation of a noise class, data augmentation, re-sampling, transfer learning, Bayesian optimisation), it automatically trains a neural network without requiring a large sample of labeled data or substantial computing resources. Our end-to-end methodology can handle noisy recordings made under different recording conditions. We test it on two different natural audio data sets: one from a group of Guinea baboons recorded at a primate research center and one from human babies recorded at home. The pipeline trains a model on 72 and 77 minutes of labeled audio recordings, reaching accuracies of 94.58% and 99.76%, respectively. It is then used to process 443 and 174 hours of natural continuous recordings, creating two new databases of 38.8 and 35.2 hours, respectively. We discuss the strengths and limitations of this approach, which can be applied to any massive audio recording.
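
The first computational step, windowing, can be sketched as follows; the window and hop lengths are illustrative placeholders, not the pipeline's actual settings.

```python
import numpy as np

def window_audio(signal, sr, win_s=0.5, hop_s=0.25):
    """Cut a long recording into fixed-size, overlapping windows; each
    window is later labeled as a vocal class or as the added 'noise'
    class.  Window/hop durations here are illustrative assumptions.
    """
    win, hop = int(win_s * sr), int(hop_s * sr)
    starts = range(0, max(len(signal) - win, 0) + 1, hop)
    return np.stack([signal[s:s + win] for s in starts])

# e.g. one hour at 16 kHz -> ~14400 half-second windows with 50% overlap
windows = window_audio(np.random.randn(16000 * 3600), sr=16000)
```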

A rate change formula for continuous-time Markov chains, useful for simulation, model selection, filtering and theory, is proven. It is used to develop Markov chain importance sampling, rejection sampling and branching particle filtering algorithms, as well as filtering equations akin to the Duncan-Mortensen-Zakai equation and the Fujisaki-Kallianpur-Kunita equation but for Markov signals with general continuous-time Markov chain observations. A direct method of solving these filtering equations is given that, for example, applies to trend, volatility and/or parameter estimation in financial models given tick-by-tick market data. All the results also apply, as special cases, to continuous-time hidden Markov models (CTHMMs), which have become important in applications like disease progression tracking; the corresponding CTHMM results are stated as corollaries.
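
For orientation, the classical likelihood-ratio formula for changing the transition rates of a continuous-time Markov chain $X$ from $q(x,y)$ to $\widetilde{q}(x,y)$, the setting which the paper's formula generalizes, reads

$$\frac{d\widetilde{P}}{dP}\bigg|_{\mathcal{F}_t} = \prod_{s \le t:\, X_{s-} \ne X_s} \frac{\widetilde{q}(X_{s-},X_s)}{q(X_{s-},X_s)} \, \exp\!\left(\int_0^t \big(q(X_s) - \widetilde{q}(X_s)\big)\,ds\right),$$

where $q(x) = \sum_{y \ne x} q(x,y)$ is the total jump rate out of $x$ and tildes denote quantities under the new measure. Importance sampling then simulates paths under $\widetilde{P}$ and reweights each by the reciprocal of this density.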

We propose a matrix-free parallel two-level deflation preconditioner combined with the complex shifted Laplacian preconditioner (CSLP) for two-dimensional Helmholtz problems. The Helmholtz equation is widely studied in seismic exploration, antennas, and medical imaging. It is notoriously difficult to solve accurately and efficiently, since numerical solvers scale poorly with the wavenumber. Motivated by the observation that for large wavenumbers the eigenvalues of the CSLP-preconditioned system shift towards zero, deflation with multigrid vectors, and subsequently with high-order vectors, was incorporated to obtain wavenumber-independent convergence. For large-scale applications, high-performance parallel scalable methods are also indispensable. In our method, we consider preconditioned Krylov subspace methods for solving the linear system obtained from finite-difference discretization. The CSLP preconditioner is approximated by one parallel geometric multigrid V-cycle. For the two-level deflation, matrix-free Galerkin coarsening as well as high-order re-discretization approaches on the coarse grid are studied. The matrix-vector multiplications in the Krylov subspace methods and the interpolation/restriction operators are implemented directly on the finite-difference grid, without constructing any coefficient matrix. These adjustments lead to direct improvements in memory consumption. Numerical experiments on model problems show wavenumber-independent convergence for medium wavenumbers. The matrix-free parallel framework shows satisfactory weak and strong parallel scalability.
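
A minimal example of the matrix-free principle: the finite-difference Helmholtz operator can be applied as a stencil without ever assembling a matrix. The sketch below (assuming a uniform grid with homogeneous Dirichlet boundaries) shows the kind of matvec a Krylov solver needs.

```python
import numpy as np

def helmholtz_matvec(u, h, k):
    """Matrix-free action of -Laplace(u) - k^2 u via the 5-point
    finite-difference stencil on an n x n grid with spacing h and
    homogeneous Dirichlet boundaries (simplified illustration).
    """
    v = np.zeros_like(u)
    v[1:-1, 1:-1] = (
        4.0 * u[1:-1, 1:-1]
        - u[:-2, 1:-1] - u[2:, 1:-1]
        - u[1:-1, :-2] - u[1:-1, 2:]
    ) / h**2 - k**2 * u[1:-1, 1:-1]
    return v
```

Since only the stencil is stored, memory grows with the number of unknowns rather than with the number of nonzeros of an assembled matrix.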

The generation of co-speech gestures for digital humans is an emerging area in the field of virtual human creation. Prior research has made progress by using acoustic and semantic information as input and adopting classification methods to identify the speaker's identity and emotion for driving co-speech gesture generation. However, this endeavour still faces significant challenges. These challenges go beyond the intricate interplay between co-speech gestures, speech acoustics, and semantics; they also encompass the complexities associated with personality, emotion, and other obscure but important factors. This paper introduces "diffmotion-v2," a speech-conditioned, diffusion-based, non-autoregressive transformer generative model built on the pre-trained WavLM model. It can produce individual and stylized full-body co-speech gestures from raw speech audio alone, eliminating the need for complex multimodal processing and manual annotation. Firstly, considering that speech audio not only contains acoustic and semantic features but also conveys personality traits, emotions, and more subtle information related to accompanying gestures, we pioneer the adaptation of WavLM, a large-scale pre-trained model, to extract low-level and high-level audio information. Secondly, we introduce an adaptive layer norm architecture in the transformer layers to learn the relationship between the speech information and the accompanying gestures. Extensive subjective evaluation experiments are conducted on the Trinity, ZEGGS, and BEAT datasets to confirm WavLM's and the model's ability to synthesize natural co-speech gestures with various styles.
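
The adaptive layer norm idea can be sketched generically: a conditioning vector derived from the speech features regresses the scale and shift of a normalization layer, so the gesture stream is modulated by the audio. This is a common construction and only an assumption about the paper's exact block.

```python
import torch
import torch.nn as nn

class AdaptiveLayerNorm(nn.Module):
    """Adaptive layer norm: the scale and shift of a parameter-free
    LayerNorm are regressed from a conditioning vector (here, pooled
    speech features).  A generic sketch, not the paper's exact layer.
    """
    def __init__(self, dim, cond_dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        self.to_scale_shift = nn.Linear(cond_dim, 2 * dim)

    def forward(self, x, cond):          # x: (B, T, dim), cond: (B, cond_dim)
        scale, shift = self.to_scale_shift(cond).chunk(2, dim=-1)
        return self.norm(x) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
```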

By combining a logarithm transformation with a corrected Milstein-type method, this article proposes an explicit, unconditionally boundary- and dynamics-preserving scheme for the stochastic susceptible-infected-susceptible (SIS) epidemic model, which takes values in (0,N). The scheme applied to the model is first proved to have a strong convergence rate of order one. Further, the dynamical behavior of the numerical approximations is analyzed, and it is shown that the scheme unconditionally preserves both the domain and the dynamics of the model. More precisely, the proposed scheme produces numerical approximations that live in the domain (0,N) and reproduce the extinction and persistence properties of the original model for any time discretization step-size h > 0, without any additional requirements on the model parameters. Numerical experiments are presented to verify our theoretical results.
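
For reference, the stochastic SIS model meant here is commonly written as

$$dI(t) = I(t)\big[\beta(N - I(t)) - (\mu + \gamma)\big]\,dt + \sigma I(t)(N - I(t))\,dW(t), \qquad I(0) \in (0, N),$$

whose exact solution remains in $(0,N)$ because the diffusion coefficient vanishes at both boundaries and the drift points inward; the logarithmic transformation is what allows the corrected Milstein-type discretization to respect these boundaries for every step-size.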

In this work, we first develop a general mesoscopic multiple-relaxation-time lattice Boltzmann (MRT-LB) model for the two-dimensional diffusion equation with a constant diffusion coefficient and source term, where the D2Q5 (five discrete velocities in two-dimensional space) lattice structure is considered. We then exactly derive the equivalent macroscopic finite-difference scheme of the MRT-LB model. Additionally, we propose a proper MRT-LB model for the diffusion equation with a linear source term and obtain an equivalent macroscopic six-level finite-difference scheme. After that, we conduct an accuracy and stability analysis of the finite-difference scheme and the mesoscopic MRT-LB model. It is found that at the diffusive scaling, both achieve fourth-order accuracy in space based on the Taylor expansion. The stability analysis also shows that both are unconditionally stable. Finally, some numerical experiments are conducted, and the numerical results are consistent with our theoretical analysis.
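
As a concrete, much simplified instance, a single-relaxation-time (BGK) D2Q5 update for the diffusion equation looks as follows; BGK is the special case of MRT with all relaxation rates equal, and the weights and units here are one common illustrative choice, not the paper's model.

```python
import numpy as np

# D2Q5 lattice: rest velocity plus the four axis neighbours
E = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]])
W = np.array([1/3, 1/6, 1/6, 1/6, 1/6])   # one common weight choice

def lbm_diffusion_step(f, tau):
    """One BGK collision + streaming step for the diffusion equation
    on a periodic grid; f has shape (5, nx, ny).  In lattice units the
    diffusivity is D = c_s^2 (tau - 1/2), with c_s^2 = 1/3 here.
    """
    phi = f.sum(axis=0)                   # macroscopic scalar field
    feq = W[:, None, None] * phi          # local equilibrium
    f = f - (f - feq) / tau               # collide
    for i, (cx, cy) in enumerate(E):      # stream (periodic wrap)
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f
```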

Recently, pre-trained language representation models such as BERT have shown great success when fine-tuned on downstream tasks, including information retrieval (IR). However, pre-training objectives tailored for ad-hoc retrieval have not been well explored. In this paper, we propose Pre-training with Representative wOrds Prediction (PROP) for ad-hoc retrieval. PROP is inspired by the classical statistical language model for IR, specifically the query likelihood model, which assumes that the query is generated as the piece of text representative of the "ideal" document. Based on this idea, we construct the representative words prediction (ROP) task for pre-training. Given an input document, we sample a pair of word sets according to the document language model, where the set with higher likelihood is deemed more representative of the document. We then pre-train the Transformer model to predict the pairwise preference between the two word sets, jointly with the Masked Language Model (MLM) objective. By further fine-tuning on a variety of representative downstream ad-hoc retrieval tasks, PROP achieves significant improvements over baselines without pre-training or with other pre-training methods. We also show that PROP can achieve strong performance under both the zero- and low-resource IR settings. The code and pre-trained models are available at //github.com/Albert-Ma/PROP.
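
A simplified sketch of how one ROP training pair might be built from a document's unigram language model (the actual paper additionally samples the set size and smooths the language model; the function and its parameters are our own illustration):

```python
import math, random
from collections import Counter

def rop_pair(doc_tokens, set_size=5):
    """Sample two word sets from the document's unigram language model
    and order them by likelihood; the pre-trained model is then taught
    to prefer the higher-likelihood ('more representative') set.
    """
    counts = Counter(doc_tokens)
    total = sum(counts.values())
    vocab = list(counts)
    probs = [counts[w] / total for w in vocab]

    def sample_set():
        words = random.choices(vocab, weights=probs, k=set_size)
        score = sum(math.log(counts[w] / total) for w in words)
        return words, score

    (s1, l1), (s2, l2) = sample_set(), sample_set()
    return (s1, s2) if l1 >= l2 else (s2, s1)   # (positive, negative)
```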

We propose a novel single-shot object detection network named Detection with Enriched Semantics (DES). Our motivation is to enrich the semantics of object detection features within a typical deep detector, by means of a semantic segmentation branch and a global activation module. The segmentation branch is supervised by weak segmentation ground-truth, i.e., no extra annotation is required. In conjunction with that, we employ a global activation module which learns the relationship between channels and object classes in a self-supervised manner. Comprehensive experimental results on both the PASCAL VOC and MS COCO detection datasets demonstrate the effectiveness of the proposed method. In particular, with a VGG16-based DES, we achieve an mAP of 81.7 on VOC2007 test and an mAP of 32.8 on COCO test-dev with an inference time of 31.5 milliseconds per image on a Titan Xp GPU. With a lower-resolution version, we achieve an mAP of 79.7 on VOC2007 with an inference time of 13.0 milliseconds per image.
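
The global activation module is described as learning channel-class relationships; a familiar stand-in for this kind of channel re-weighting is squeeze-and-excitation-style gating, sketched below purely for illustration (not the DES module itself).

```python
import torch
import torch.nn as nn

class GlobalActivationSketch(nn.Module):
    """SE-style channel gating: global pooling summarizes the feature
    map, a small MLP produces per-channel weights, and channels are
    re-scaled so class-relevant channels can be emphasized.
    """
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))   # (B, C) channel weights
        return x * w[:, :, None, None]
```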
