
ICESat-2, launched in 2018, carries the ATLAS instrument, a photon-counting spaceborne lidar that provides strip samples over the terrain. While the mission was primarily designed for snow and ice monitoring, there has been great interest in using ICESat-2 to predict forest above-ground biomass density (AGBD). Because ICESat-2 is on a polar orbit, it provides good spatial coverage of boreal forests. The aim of this study is to evaluate the estimation of mean AGBD from ICESat-2 data using a hierarchical modeling approach combined with rigorous statistical inference. We propose a hierarchical hybrid inference approach for uncertainty quantification of the AGBD estimated from ICESat-2 lidar strips. Our approach models the errors arising from the multiple modeling steps, including the allometric models used for predicting tree-level AGB. To test the procedure, we use data from two adjacent study sites, denoted Valtimo and Nurmes, of which the Valtimo site is used for model training and the Nurmes site for validation. The ICESat-2 estimate of mean AGBD in the Nurmes validation area was 63.2$\pm$1.9 Mg/ha (relative standard error of 2.9%). The local reference hierarchical model-based estimate obtained from wall-to-wall airborne lidar data was 63.9$\pm$0.6 Mg/ha (relative standard error of 1.0%). The reference estimate was within the 95% confidence interval of the ICESat-2 hierarchical hybrid estimate. The small standard errors indicate that the proposed method is useful for AGBD assessment. However, some sources of error were not accounted for in this study, and thus the real uncertainties are probably slightly larger than those reported.
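The core of hybrid inference is that the standard error of the mean combines design-based sampling variance over the lidar strips with model-based variance propagated from the prediction steps. A minimal sketch of that combination, with entirely hypothetical per-strip AGBD values and a placeholder model-variance term (the paper's hierarchical estimator is more involved):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-strip mean AGBD predictions (Mg/ha) from 25 ICESat-2 strips.
strip_agbd = rng.normal(loc=63.0, scale=12.0, size=25)

# Design-based part: mean over strips and its sampling variance.
m = strip_agbd.size
mean_agbd = strip_agbd.mean()
var_sampling = strip_agbd.var(ddof=1) / m

# Model-based part (hybrid inference): variance contributed by the prediction
# models, here a placeholder value standing in for the propagated model errors.
var_model = 1.5
se_total = np.sqrt(var_sampling + var_model)

print(f"mean AGBD = {mean_agbd:.1f} +/- {1.96 * se_total:.1f} Mg/ha (95% CI)")
```

Ignoring the model variance (setting `var_model = 0`) is what makes naive design-based intervals too optimistic when the strip values are themselves model predictions.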

Related content

Assouad-Nagata dimension addresses both the large- and small-scale behavior of metric spaces and is a refinement of Gromov's asymptotic dimension. A metric space $M$ is a minor-closed metric if there exists an (edge-)weighted graph $G$ in a fixed minor-closed family such that the underlying space of $M$ is the vertex-set of $G$, and the metric of $M$ is the distance function in $G$. Minor-closed metrics naturally arise when removing redundant edges of the underlying graphs by edge-deletion and edge-contraction. In this paper, we determine the Assouad-Nagata dimension of every minor-closed metric. This result is a common generalization of known results for the asymptotic dimension of $H$-minor free unweighted graphs and the Assouad-Nagata dimension of some 2-dimensional continuous spaces (e.g.\ complete Riemannian surfaces with finite Euler genus) and their corollaries.

We introduce a unified framework of symmetric resonance-based schemes which preserve central symmetries of the underlying PDE. We extend the resonance decorated trees approach introduced in arXiv:2005.01649 to a richer framework by exploring novel ways of iterating Duhamel's formula, capturing the dominant parts while interpolating the lower parts of the resonances in a symmetric manner. This gives a general class of new numerical schemes with more degrees of freedom than the original scheme from arXiv:2005.01649. To encapsulate the central structures we develop new forest formulae that contain the previous class of schemes and derive conditions on their coefficients in order to obtain symmetric schemes. These forest formulae echo the one used in Quantum Field Theory for renormalising Feynman diagrams and the one used for the renormalisation of singular SPDEs via the theory of Regularity Structures. These new algebraic tools not only provide a nice parametrisation of the previous resonance-based integrators but also allow us to find new symmetric schemes with remarkable structure-preservation properties even at very low regularity.

Automated Audio Captioning (AAC) involves generating natural language descriptions of audio content using encoder-decoder architectures. An audio encoder produces audio embeddings that are fed to a decoder, usually a Transformer decoder, for caption generation. In this work, we describe our model, whose novelty, compared to existing models, lies in the use of a ConvNeXt architecture as the audio encoder, adapted from the vision domain to audio classification. This model, called CNext-trans, achieved state-of-the-art scores on the AudioCaps (AC) dataset and performed competitively on Clotho (CL), while using four to forty times fewer parameters than existing models. We examine potential biases in the AC dataset due to its origin in AudioSet by investigating the impact of an unbiased encoder on performance. Using the well-known CNN14 from PANN, for instance, as an unbiased encoder, we observed a 1.7% absolute reduction in SPIDEr score (where higher scores indicate better performance). To improve cross-dataset performance, we conducted experiments combining multiple AAC datasets (AC, CL, MACS, WavCaps) for training. Although this strategy enhanced overall model performance across datasets, it still fell short compared to models trained specifically on a single target dataset, indicating the absence of a one-size-fits-all model. To mitigate performance gaps between datasets, we introduced a Task Embedding (TE) token, allowing the model to identify the source dataset of each input sample. We provide insights into the impact of these TEs on both the form (words) and content (sound event types) of the generated captions. The resulting model, named CoNeTTE, an unbiased CNext-trans model enriched with dataset-specific Task Embeddings, achieved SPIDEr scores of 44.1% and 30.5% on AC and CL, respectively. Code available: //github.com/Labbeti/conette-audio-captioning.

This paper studies the implementation of a Zero-Knowledge Protocol (ZKP) with elliptic curve cryptography in a computationally limited environment, such as smart cards, using Java Card. We also explain how the zero-knowledge protocol was selected for implementation on a smart card and how the benchmarking used to select this protocol was conducted. The paper also presents the theoretical development needed to implement the ZKP using elliptic curve cryptography. Keywords: Authentication; Zero-knowledge; Cryptography; Elliptic curve; Java Card; Smart cards
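The classic zero-knowledge identification protocol in this setting is Schnorr's: the prover commits, the verifier challenges, the prover responds, and the verifier checks one group equation. The paper targets elliptic-curve groups on Java Card; the sketch below uses a tiny prime-order subgroup of $\mathbb{Z}_p^*$ instead for brevity (toy parameters, not secure), since the commit/challenge/respond/verify flow is identical:

```python
# Toy Schnorr identification protocol: a zero-knowledge proof of knowledge of
# the discrete logarithm x of y = g^x. Parameters are deliberately tiny and
# insecure; a real deployment would use an elliptic-curve group as in the paper.
import secrets

p, q, g = 23, 11, 2        # g has prime order q = 11 in Z_23^*

def keygen():
    x = secrets.randbelow(q - 1) + 1   # secret key
    return x, pow(g, x, p)             # public key y = g^x mod p

def prove_commit():
    r = secrets.randbelow(q - 1) + 1   # ephemeral nonce
    return r, pow(g, r, p)             # commitment t = g^r mod p

def prove_respond(r, x, c):
    return (r + c * x) % q             # response s = r + c*x mod q

def verify(y, t, c, s):
    return pow(g, s, p) == (t * pow(y, c, p)) % p   # g^s == t * y^c ?

x, y = keygen()
r, t = prove_commit()
c = secrets.randbelow(q)               # verifier's random challenge
s = prove_respond(r, x, c)
print("accepted:", verify(y, t, c, s))
```

The check works because $g^s = g^{r + cx} = g^r (g^x)^c = t\,y^c$; the response $s$ leaks nothing about $x$ as long as the nonce $r$ is fresh and uniform.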

We investigate the ill-posed inverse problem of recovering unknown spatially dependent parameters in nonlinear evolution PDEs. We propose a bi-level Landweber scheme, where the upper-level parameter reconstruction embeds a lower-level state approximation. This can be seen as combining the classical reduced setting and the newer all-at-once setting, allowing us to, respectively, utilize well-posedness of the parameter-to-state map and bypass having to solve nonlinear PDEs exactly. Using this, we derive stopping rules for the lower- and upper-level iterations and prove convergence of the bi-level method. We discuss the application to parameter identification for the Landau-Lifshitz-Gilbert equation in magnetic particle imaging.
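The building block of such schemes is the classical Landweber iteration $x_{k+1} = x_k + \omega A^\top (y - A x_k)$, stopped early by the discrepancy principle so that the noise is not fitted. A minimal sketch on a toy linear system (the paper treats nonlinear evolution PDEs, not this linear setting):

```python
import numpy as np

# Landweber iteration for a toy linear inverse problem A x = y with noisy data,
# stopped by the discrepancy principle (illustrative; not the bi-level scheme).
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
x_true = rng.normal(size=10)
noise_level = 1e-2
y = A @ x_true + noise_level * rng.normal(size=30)

omega = 1.0 / np.linalg.norm(A, 2) ** 2   # step size below 2 / ||A||^2
tau = 2.0                                  # discrepancy-principle constant
x = np.zeros(10)
for k in range(10000):
    residual = y - A @ x
    if np.linalg.norm(residual) <= tau * noise_level * np.sqrt(30):
        break                              # stop before fitting the noise
    x = x + omega * A.T @ residual         # gradient step on ||A x - y||^2 / 2

print(f"stopped at iteration {k}, error = {np.linalg.norm(x - x_true):.3f}")
```

In the bi-level setting described above, an inner loop of this type approximates the state while an outer loop of the same form updates the parameter, with the stopping rules coupling the two levels.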

Model order reduction provides low-complexity high-fidelity surrogate models that allow rapid and accurate solutions of parametric differential equations. The development of reduced order models for parametric nonlinear Hamiltonian systems is still challenged by several factors: (i) the geometric structure encoding the physical properties of the dynamics; (ii) the slowly decaying Kolmogorov $n$-width of conservative dynamics; (iii) the gradient structure of the nonlinear flow velocity; (iv) high variations in the numerical rank of the state as a function of time and parameters. We propose to address these aspects via a structure-preserving adaptive approach that combines symplectic dynamical low-rank approximation with adaptive gradient-preserving hyper-reduction and parameter sampling. Additionally, we propose to vary in time the dimensions of both the reduced basis space and the hyper-reduction space by monitoring the quality of the reduced solution via an error indicator related to the projection error of the Hamiltonian vector field. The resulting adaptive hyper-reduced models preserve the geometric structure of the Hamiltonian flow, do not rely on prior information on the dynamics, and can be solved at a cost that is linear in the dimension of the full order model and linear in the number of test parameters. Numerical experiments demonstrate the improved performance of the resulting fully adaptive models compared to the original and reduced order models.

Recent advancements in real-image editing have been attributed to the exploration of the latent space of Generative Adversarial Networks (GANs). However, the main challenge of this procedure is GAN inversion, which aims to map the image to the latent space accurately. Existing methods that work in the extended latent space $W+$ are unable to achieve low distortion and high editability simultaneously. To address this issue, we propose an approach that works in the native latent space $W$ and tunes the generator network to restore missing image details. We introduce a novel regularization strategy with learnable coefficients, obtained by training a randomized StyleGAN 2 model, WRanGAN. This method outperforms traditional approaches in terms of reconstruction quality and computational efficiency, achieving the lowest distortion with four times fewer parameters. Furthermore, we observe a slight improvement in the quality of constructing hyperplanes corresponding to binary image attributes. We demonstrate the effectiveness of our approach on two complex datasets: Flickr-Faces-HQ and LSUN Church.

In genetic studies, haplotype data provide more refined information than data about separate genetic markers. However, large-scale studies that genotype hundreds to thousands of individuals may only provide results of pooled data, where only the total allele counts of each marker in each pool are reported. Methods for inferring haplotype frequencies from pooled genetic data that scale well with pool size rely on a normal approximation, which we observe to produce unreliable inference when applied to real data. We illustrate cases where the approximation breaks down, due to the normal covariance matrix being near-singular. As an alternative to approximate methods, in this paper we propose exact methods to infer haplotype frequencies from pooled genetic data based on a latent multinomial model, where the observed allele counts are considered integer combinations of latent, unobserved haplotype counts. One of our methods, latent count sampling via Markov bases, achieves approximately linear runtime with respect to pool size. Our exact methods produce more accurate inference over existing approximate methods for synthetic data and for data based on haplotype information from the 1000 Genomes Project. We also demonstrate how our methods can be applied to time-series of pooled genetic data, as a proof of concept of how our methods are relevant to more complex hierarchical settings, such as spatiotemporal models.
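The latent multinomial idea is concrete even in the smallest case: with two biallelic markers, every (haploid) sequence in a pool is one of four haplotypes 00, 01, 10, 11, but only the per-marker allele totals are observed, so the haplotype counts are latent integers constrained by two linear equations. A toy enumeration of the consistent latent counts (a hypothetical example, not the paper's Markov-basis sampler):

```python
# Enumerate latent haplotype counts consistent with observed pooled allele
# counts for two biallelic markers. Haplotypes are 00, 01, 10, 11; marker 1's
# alt-allele total is n10 + n11 and marker 2's is n01 + n11.
def consistent_haplotype_counts(n, a1, a2):
    """All (n00, n01, n10, n11) with n10+n11 = a1, n01+n11 = a2, total = n."""
    solutions = []
    for n11 in range(min(a1, a2) + 1):
        n10, n01 = a1 - n11, a2 - n11
        n00 = n - n10 - n01 - n11
        if n00 >= 0:
            solutions.append((n00, n01, n10, n11))
    return solutions

# Pool of 10 sequences; 4 carry the alt allele at marker 1, 6 at marker 2.
for counts in consistent_haplotype_counts(10, 4, 6):
    print(counts)
```

Exact inference sums (or samples) over this solution set; as the number of markers and the pool size grow, the set explodes, which is where the Markov-basis sampler described above keeps the cost roughly linear in pool size.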

This paper develops a new vascular respiratory motion compensation algorithm, Motion-Related Compensation (MRC), which compensates for vascular respiratory motion by extrapolating the correlation between invisible vascular and visible non-vascular motion. Robot-assisted vascular intervention can significantly reduce the radiation exposure of surgeons. In robot-assisted image-guided intervention, blood vessels are constantly moving and deforming due to respiration, and they are invisible in X-ray images unless contrast agents are injected. The vascular respiratory motion compensation technique predicts 2D vascular roadmaps in live X-ray images. When blood vessels are visible after contrast agent injection, vascular respiratory motion compensation is conducted based on the sparse Lucas-Kanade feature tracker. An MRC model is trained to learn the correlation between vascular and non-vascular motions. During the intervention, the positions of the invisible blood vessels are predicted from the visible tissues and the trained MRC model. Moreover, a Gaussian-based outlier filter is adopted for refinement. Experiments on in-vivo data sets show that the proposed method yields vascular respiratory motion compensation in 0.032 s, with an average error of 1.086 mm. Our real-time and accurate vascular respiratory motion compensation approach contributes to modern vascular intervention and surgical robots.
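The Gaussian-based refinement step can be pictured as a z-score gate on the tracked feature displacements: motions that deviate from the pool's mean by more than a few standard deviations are discarded before the roadmap is updated. An illustrative reimplementation on synthetic displacements (not the authors' exact filter):

```python
import numpy as np

# Gaussian-based outlier filter for tracked feature displacements: keep a
# feature only if its (dx, dy) lies within k standard deviations of the mean
# motion along both axes. Illustrative sketch on synthetic data.
def gaussian_outlier_filter(displacements, k=2.0):
    """displacements: (N, 2) array of per-feature (dx, dy); returns inlier mask."""
    mean = displacements.mean(axis=0)
    std = displacements.std(axis=0) + 1e-8        # avoid division by zero
    z = np.abs((displacements - mean) / std)      # per-axis z-scores
    return (z < k).all(axis=1)                    # inlier iff both axes agree

# 20 features moving coherently (~2 px downward) plus one spurious track.
disp = np.vstack([np.random.default_rng(1).normal([0.0, 2.0], 0.2, (20, 2)),
                  [[15.0, -8.0]]])
mask = gaussian_outlier_filter(disp)
print("inliers:", int(mask.sum()), "of", len(disp))
```

In the compensation pipeline such a gate matters because a single mistracked Lucas-Kanade feature can otherwise drag the predicted vascular roadmap off the true vessel position.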

We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image classification. It is a simple residual network that alternates (i) a linear layer in which image patches interact, independently and identically across channels, and (ii) a two-layer feed-forward network in which channels interact independently per patch. When trained with a modern training strategy using heavy data-augmentation and optionally distillation, it attains surprisingly good accuracy/complexity trade-offs on ImageNet. We will share our code based on the Timm library and pre-trained models.
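The block structure described above is compact enough to sketch directly: one linear layer mixing information across patches (applied identically to every channel), then a two-layer feed-forward network mixing channels (applied identically to every patch), each with a residual connection. A shape-only NumPy sketch with untrained random weights (omitting the paper's Affine normalization layers and training recipe):

```python
import numpy as np

# Minimal sketch of one ResMLP block: (i) cross-patch linear mixing, then
# (ii) a per-patch two-layer feed-forward network, each with a residual add.
rng = np.random.default_rng(0)
n_patches, dim, hidden = 16, 32, 128

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

W_patch = rng.normal(0, 0.02, (n_patches, n_patches))  # patch-mixing weights
W1 = rng.normal(0, 0.02, (dim, hidden))                # channel MLP, layer 1
W2 = rng.normal(0, 0.02, (hidden, dim))                # channel MLP, layer 2

def resmlp_block(x):                 # x: (n_patches, dim)
    x = x + W_patch @ x              # (i) patches interact, per channel
    x = x + gelu(x @ W1) @ W2        # (ii) channels interact, per patch
    return x

x = rng.normal(size=(n_patches, dim))
print("output shape:", resmlp_block(x).shape)
```

Note that both sub-layers are plain matrix multiplications: the block contains no convolution or self-attention, which is the architectural point of ResMLP.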
