
This report on axisymmetric ultraspherical/Gegenbauer polynomials and their use in Ambisonic directivity design in 2D and 3D presents an alternative mathematical formalism to what can be read in, e.g., my and Matthias Frank's book on Ambisonics, Jérôme Daniel's thesis, Gary Elko's differential-array book chapters, or Boaz Rafaely's spherical microphone array book. Ultraspherical/Gegenbauer polynomials are highly valuable when designing axisymmetric beams and understanding spherical t-designs, and this report sheds some light on what circular, spherical, and ultraspherical axisymmetric polynomials are. While mathematically interesting in themselves, they are useful in spherical beamforming as described in the literature on spherical and differential microphone arrays. In this report, these ultraspherical/Gegenbauer polynomials are used to derive uniformly, for arbitrary dimension D, the various directivity designs or Ambisonic order weightings known from the literature: max-DI/basic, max-rE, supercardioid, and cardioid/in-phase. Is there a way to relate higher-order cardioids and supercardioids? How could one define directivity patterns with an on-axis flatness constraint?
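
A minimal numerical sketch of such an axisymmetric beam, assuming scipy and numpy; the max-rE weights below use the common 3D approximation a_n = P_n(cos(137.9°/(N + 1.51))), not a result derived in the report itself:

```python
import numpy as np
from scipy.special import eval_gegenbauer, eval_legendre, eval_chebyt

def axisymmetric_beam(theta, weights, D=3):
    """Evaluate g(theta) = sum_n w_n C_n^{(D-2)/2}(cos theta).
    D=2 uses Chebyshev T_n (the circular-harmonics limit of Gegenbauer),
    D=3 uses Legendre P_n = C_n^{1/2}, D>3 uses general Gegenbauer."""
    x = np.cos(theta)
    g = np.zeros_like(x)
    for n, w in enumerate(weights):
        if D == 2:
            g += w * eval_chebyt(n, x)
        elif D == 3:
            g += w * eval_legendre(n, x)
        else:
            g += w * eval_gegenbauer(n, (D - 2) / 2, x)
    return g

# max-rE order weights for D=3 (well-known approximation, assumed here):
N = 3
r_E = np.cos(np.radians(137.9) / (N + 1.51))
w_maxre = np.array([eval_legendre(n, r_E) for n in range(N + 1)])

theta = np.linspace(0.0, np.pi, 181)
pattern = axisymmetric_beam(theta, w_maxre, D=3)
```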

Related content

We propose a material design method via gradient-based optimization of compositions, overcoming the limitations of traditional methods: exhaustive database searches and conditional generation models. It optimizes inputs via backpropagation, aligning the model's output closely with the target property and facilitating the discovery of unlisted materials and precise property determination. Our method is also capable of adaptive optimization under new conditions without retraining. Applying it to the exploration of high-Tc superconductors, we identified potential compositions beyond existing databases and discovered new hydrogen superconductors via conditional optimization. The method is versatile and significantly advances material design by enabling efficient, extensive searches and adaptability to new constraints.
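
A minimal sketch of the input-optimization idea, in PyTorch; the predictor network, its dimensions, and the softmax parameterization of compositions are illustrative assumptions, not the authors' implementation:

```python
import torch

# Stand-in for a pretrained property predictor (hypothetical architecture).
model = torch.nn.Sequential(
    torch.nn.Linear(10, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
for p in model.parameters():
    p.requires_grad_(False)            # freeze the model; only the input moves

x = torch.zeros(10, requires_grad=True)   # logits over 10 candidate elements
target_tc = torch.tensor([250.0])         # target property, e.g. Tc in kelvin
opt = torch.optim.Adam([x], lr=0.05)

for _ in range(500):
    opt.zero_grad()
    comp = torch.softmax(x, dim=-1)    # keeps fractions positive, summing to 1
    loss = (model(comp) - target_tc).pow(2).mean()
    loss.backward()                    # gradient w.r.t. the *input* composition
    opt.step()

print(torch.softmax(x, dim=-1).detach())   # optimized candidate composition
```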

This paper presents, for the first time, the Mediterraneous protocol. It is designed to support the development of an Internet of digital services that are owned by their creators and consumed by users who present their decentralised digital identity and a proof of service purchase. Mediterraneous is Self-Sovereign Identity (SSI) native, integrating the SSI model at the core of its working principles to overcome the limitations that result from the pseudonyms and centralised access control of existing Web3 solutions.
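
A hedged illustration of the access pattern described, with all types and verification rules as hypothetical stand-ins for real SSI cryptography; this is not the Mediterraneous specification:

```python
from dataclasses import dataclass

@dataclass
class Presentation:        # a user's SSI verifiable presentation
    holder_did: str
    signature_valid: bool  # stand-in for real cryptographic verification

@dataclass
class PurchaseProof:       # proof that the holder bought access to a service
    holder_did: str
    service_id: str
    valid: bool            # stand-in for an on-chain / receipt check

def authorize(pres: Presentation, proof: PurchaseProof, service_id: str) -> bool:
    # 1. The presentation must verify: the holder controls the DID.
    if not pres.signature_valid:
        return False
    # 2. The purchase proof must be valid, issued for this service,
    #    and bound to the same DID (no pseudonym indirection).
    return (proof.valid and proof.service_id == service_id
            and proof.holder_did == pres.holder_did)

pres = Presentation("did:example:alice", signature_valid=True)
proof = PurchaseProof("did:example:alice", "svc-42", valid=True)
print(authorize(pres, proof, "svc-42"))  # True: access granted
```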

This paper introduces kDGLM, an R package designed for Bayesian analysis of Generalized Dynamic Linear Models (GDLM), with a primary focus on both uni- and multivariate exponential families. Emphasizing sequential inference for time series data, the kDGLM package provides comprehensive support for fitting, smoothing, monitoring, and feed-forward interventions. The methodology employed by kDGLM, as proposed in Alves et al. (2024), seamlessly integrates with well-established techniques from the literature, particularly those used in (Gaussian) Dynamic Models, including discount strategies, autoregressive components, transfer functions, and more. Leveraging key properties of the Kalman filter and smoothing, kDGLM exhibits remarkable computational efficiency, enabling virtually instantaneous fitting times that scale linearly with the length of the time series. This characteristic makes it an exceptionally powerful tool for the analysis of extended time series. For example, when modeling monthly hospital admissions in Brazil due to gastroenteritis from 2010 to 2022, the fitting process took a mere 0.11s. Even in a spatio-temporal variant of the model (27 outcomes, 110 latent states, and 156 months, yielding 17,160 parameters), the fitting time was only 4.24s. Currently, the kDGLM package supports a range of distributions, including univariate Normal (unknown mean and observational variance), bivariate Normal (unknown means, observational variances, and correlation), Poisson, Gamma (known shape and unknown mean), and Multinomial (known number of trials and unknown event probabilities). Additionally, kDGLM allows the joint modeling of multiple time series, provided each series follows one of the supported distributions. Ongoing efforts aim to continuously expand the supported distributions.
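
A minimal sketch of the Kalman filter recursion underlying this linear-in-time cost (the generic algorithm for a univariate dynamic linear model, not the kDGLM API):

```python
import numpy as np

def kalman_filter(y, F, G, V, W, m0, C0):
    """One pass over T observations: O(T) cost regardless of series length.
    y: (T,) observations; F: observation vector; G: state transition matrix;
    V: observation variance; W: state evolution covariance."""
    m, C = m0, C0
    means = []
    for t in range(len(y)):
        a = G @ m                      # prior mean
        R = G @ C @ G.T + W            # prior covariance
        f = F @ a                      # one-step forecast
        Q = F @ R @ F.T + V            # forecast variance
        A = R @ F.T / Q                # adaptive gain
        m = a + A * (y[t] - f)         # posterior mean
        C = R - np.outer(A, A) * Q     # posterior covariance
        means.append(m.copy())
    return np.array(means)

# Local-level model on synthetic data (state dimension 1):
T = 200
y = np.cumsum(np.random.randn(T)) + np.random.randn(T)
m = kalman_filter(y, F=np.ones(1), G=np.eye(1), V=1.0, W=np.eye(1) * 0.1,
                  m0=np.zeros(1), C0=np.eye(1) * 10)
```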

Answering logical queries on knowledge graphs (KGs) poses a significant challenge for machine reasoning. The primary obstacle in this task stems from the inherent incompleteness of KGs. Existing research has predominantly focused on the issue of missing edges in KGs, thereby neglecting another aspect of incompleteness: the emergence of new entities. Furthermore, most existing methods tend to reason over each logical operator separately, rather than comprehensively analyzing the query as a whole during the reasoning process. In this paper, we propose a query-aware prompt-fused framework named Pro-QE, which can incorporate existing query embedding methods and addresses the embedding of emerging entities through contextual information aggregation. Additionally, a query prompt, generated by encoding the symbolic query, is introduced to gather information relevant to the query from a holistic perspective. To evaluate the efficacy of our model in the inductive setting, we introduce two new challenging benchmarks. Experimental results demonstrate that our model successfully handles the issue of unseen entities in logical queries. Furthermore, an ablation study confirms the efficacy of the aggregator and prompt components.
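
A toy illustration of the two ingredients named above, with hypothetical shapes and encoders: contextual aggregation for an unseen entity, and fusion of a holistic query prompt:

```python
import torch

dim = 64
neighbour_emb = torch.randn(5, dim)       # embeddings of observed neighbours
relation_emb = torch.randn(5, dim)        # relations connecting them

# (i) Contextual aggregation: mean of relation-conditioned neighbour messages
#     gives an embedding for an entity never seen during training.
messages = neighbour_emb + relation_emb
unseen_entity = messages.mean(dim=0)

# (ii) Query prompt: encode the symbolic query (stand-in encoder here) and
#      fuse it with the entity representation from a holistic perspective.
query_tokens = torch.randn(7, dim)        # encoded symbolic query (hypothetical)
prompt = query_tokens.mean(dim=0)
fused = torch.cat([unseen_entity, prompt])
answer_repr = torch.nn.Linear(2 * dim, dim)(fused)  # query-aware representation
```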

Symplectic integrators are widely used numerical integrators for Hamiltonian mechanics, which preserve the Hamiltonian structure (symplecticity) of the system. Although a symplectic integrator does not conserve the energy of the system, it is well known that there exists a conserved modified Hamiltonian, called the shadow Hamiltonian. For Nambu mechanics, a generalization of Hamiltonian mechanics, we can also construct structure-preserving integrators by the same procedure used to construct symplectic integrators. For such structure-preserving integrators, however, the existence of shadow Hamiltonians is non-trivial: Nambu mechanics is driven by multiple Hamiltonians, and it is not obvious whether the time evolution of the integrator can be cast as a Nambu mechanical time evolution driven by multiple shadow Hamiltonians. In the present paper, we construct structure-preserving integrators for a simple Nambu mechanical system and derive the shadow Hamiltonians in two ways. This is the first attempt to derive shadow Hamiltonians of structure-preserving integrators for Nambu mechanics. We show that the fundamental identity, which corresponds to the Jacobi identity in Hamiltonian mechanics, plays an important role in calculating the shadow Hamiltonians via the Baker-Campbell-Hausdorff formula. It turns out that the resulting shadow Hamiltonians have indefinite forms, depending on how the fundamental identities are used. This is not a technical artifact, because the exact shadow Hamiltonians obtained independently show the same indefiniteness.
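
A sketch of the generic Baker-Campbell-Hausdorff step behind such derivations (standard for splitting integrators, with sign conventions for Hamiltonian vector fields varying; this is not the paper's specific Nambu-mechanical calculation):

```latex
% For vector fields X_A, X_B with time-h flows e^{h X_A}, e^{h X_B},
% the composed map is the exact flow of a modified vector field:
\[
  e^{h X_A}\, e^{h X_B}
  = \exp\!\Big( h\,(X_A + X_B) + \tfrac{h^2}{2}\,[X_A, X_B]
      + \tfrac{h^3}{12}\,\big([X_A,[X_A,X_B]] - [X_B,[X_A,X_B]]\big)
      + \cdots \Big).
\]
% In Hamiltonian mechanics, [X_A, X_B] is again a Hamiltonian vector field
% with Hamiltonian \{A, B\} (up to sign convention), so the whole series is
% generated by a single shadow Hamiltonian
% \tilde H = A + B + \tfrac{h}{2}\{A, B\} + O(h^2).
% In Nambu mechanics each nested bracket must instead be rewritten, via the
% fundamental identity, as a Nambu bracket driven by a pair of shadow
% Hamiltonians, and different rewritings yield different (equivalent) forms.
```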

Instance segmentation is a fundamental task in computer vision with broad applications across various industries. In recent years, with the proliferation of deep learning and artificial intelligence applications, how to train effective models with limited data has become a pressing issue for both academia and industry. In the Visual Inductive Priors challenge (VIPriors2023), participants must train a model capable of precisely locating individuals on a basketball court, all while working with limited data and without the use of transfer learning or pre-trained models. We propose a Memory-effIcient inStance Segmentation framework based on visual inductive prior flow propagation that effectively incorporates inherent prior information from the dataset into both the data preprocessing and data augmentation stages, as well as the inference phase. Our team's (ACVLAB) experiments demonstrate that our model achieves promising performance (0.509 AP@0.50:0.95) even under limited data and memory constraints.
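
A hedged sketch of one way a dataset prior can enter the augmentation stage, assuming a court-region mask; this is an illustrative guess at the idea, not the team's released code:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 720, 1280
court_mask = np.zeros((H, W), bool)
court_mask[300:700, 100:1180] = True      # assumed court-region prior

def paste_instance(image, instance, inst_mask):
    """Copy-paste an instance at a location sampled inside the court prior,
    so augmented players only appear where the prior says they can be."""
    h, w = inst_mask.shape
    ys, xs = np.nonzero(court_mask[: H - h, : W - w])
    i = rng.integers(len(ys))
    y, x = ys[i], xs[i]
    region = image[y:y + h, x:x + w]
    region[inst_mask] = instance[inst_mask]   # overwrite pixels under the mask
    return image

image = np.zeros((H, W, 3), np.uint8)
instance = np.full((50, 20, 3), 255, np.uint8)   # toy "player" crop
mask = np.ones((50, 20), bool)
augmented = paste_instance(image, instance, mask)
```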

The use of Artificial Intelligence (AI) based on data-driven algorithms has become ubiquitous in today's society. Yet, in many cases, and especially when stakes are high, humans still make the final decisions. The critical question, therefore, is whether AI helps humans make better decisions than either the human alone or the AI alone. We introduce a new methodological framework that can be used to answer this question experimentally without additional assumptions. We measure a decision maker's ability to make correct decisions using standard classification metrics based on the baseline potential outcome. We consider a single-blinded experimental design, in which the provision of AI-generated recommendations is randomized across cases, with a human making the final decisions. Under this experimental design, we show how to compare the performance of three alternative decision-making systems: human alone, human with AI, and AI alone. We apply the proposed methodology to data from our own randomized controlled trial of a pretrial risk assessment instrument. We find that AI recommendations do not improve the classification accuracy of a judge's decision to impose cash bail. Our analysis also shows that AI-alone decisions generally perform worse than human decisions, with or without AI assistance. Finally, AI recommendations tend to impose cash bail on non-white arrestees more often than necessary, when compared to white arrestees.
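
A synthetic illustration of scoring the three systems under the randomized design; all data, accuracy rates, and judge behaviour below are simulated, not the trial's results:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
y = rng.integers(0, 2, n)                       # baseline potential outcome
z = rng.integers(0, 2, n)                       # 1 = AI recommendation shown
ai_rec = np.where(rng.random(n) < 0.70, y, 1 - y)       # AI correct on 70% of cases
judge_own = np.where(rng.random(n) < 0.75, y, 1 - y)    # judge correct 75% alone
follows = (z == 1) & (rng.random(n) < 0.5)              # follows AI half the time
decision = np.where(follows, ai_rec, judge_own)

# The randomization of z lets the two human arms be compared directly,
# and the logged recommendations let AI-alone be scored on all cases:
acc_human = (decision[z == 0] == y[z == 0]).mean()      # human alone
acc_human_ai = (decision[z == 1] == y[z == 1]).mean()   # human with AI
acc_ai = (ai_rec == y).mean()                           # AI alone
print(f"human alone: {acc_human:.3f}  "
      f"human+AI: {acc_human_ai:.3f}  AI alone: {acc_ai:.3f}")
```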

This work presents a cubature scheme for the fitting of spatio-temporal Poisson point processes. The methodology is implemented in R (R Core Team, 2024) in the package stopp (D'Angelo and Adelfio, 2023), published on the Comprehensive R Archive Network (CRAN) and available at https://CRAN.R-project.org/package=stopp. Since the number of dummy points should be sufficient for an accurate estimate of the likelihood, numerical experiments are currently under development to give guidelines on this aspect.
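
A hedged sketch of the Berman-Turner-style cubature idea behind such fitting, on a toy intensity over the unit space-time cube; this is not the stopp package's code:

```python
import numpy as np

# Poisson process log-likelihood: sum_i log lam(x_i) - integral of lam.
# The integral is approximated by cubature weights over data + dummy points.
rng = np.random.default_rng(0)
lam = lambda x, y, t: np.exp(1.0 + 2.0 * x - t)      # toy intensity function

data = rng.random((120, 3))                          # observed points (x, y, t)
dummy = rng.random((2000, 3))                        # dummy / cubature points
pts = np.vstack([data, dummy])
w = np.full(len(pts), 1.0 / len(pts))                # weights summing to |[0,1]^3| = 1
is_data = np.zeros(len(pts), bool)
is_data[:len(data)] = True

rate = lam(pts[:, 0], pts[:, 1], pts[:, 2])
loglik = np.sum(np.log(rate[is_data])) - np.sum(w * rate)
print(f"cubature log-likelihood approximation: {loglik:.2f}")
```

As the abstract notes, the quality of this approximation hinges on using enough dummy points; too few makes the integral term, and hence the fitted parameters, unreliable.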

This paper develops an in-depth treatment of the problem of approximating the Gaussian smoothing and Gaussian derivative computations in scale-space theory for application to discrete data. With close connections to previous axiomatic treatments of continuous and discrete scale-space theory, we consider three main ways of discretizing these scale-space operations in terms of explicit discrete convolutions, based on (i) sampling the Gaussian kernels and the Gaussian derivative kernels, (ii) locally integrating the Gaussian kernels and the Gaussian derivative kernels over each pixel support region, and (iii) basing the scale-space analysis on the discrete analogue of the Gaussian kernel and then computing derivative approximations by applying small-support central difference operators to the spatially smoothed image data. We study the properties of these three main discretization methods both theoretically and experimentally, and characterize their performance by quantitative measures, including the results they give rise to with respect to the task of scale selection, investigated for four different use cases, and with emphasis on the behaviour at fine scales. The results show that the sampled Gaussian kernels and derivatives, as well as the integrated Gaussian kernels and derivatives, perform very poorly at very fine scales. At very fine scales, the discrete analogue of the Gaussian kernel with its corresponding discrete derivative approximations performs substantially better. The sampled Gaussian kernel and the sampled Gaussian derivatives do, on the other hand, lead to numerically very good approximations of the corresponding continuous results when the scale parameter is sufficiently large; in the experiments presented in the paper, this means a scale parameter greater than about 1, in units of the grid spacing.
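
A small numerical sketch contrasting approaches (i) and (iii), assuming scipy; the discrete analogue of the Gaussian kernel is T(n, t) = e^{-t} I_n(t), with I_n the modified Bessel function of integer order:

```python
import numpy as np
from scipy.special import ive

def sampled_gaussian(n, sigma):
    """Approach (i): sample the continuous Gaussian at integer positions."""
    return np.exp(-n**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def discrete_gaussian(n, sigma):
    """Approach (iii): discrete analogue T(n, t) = exp(-t) I_n(t), t = sigma^2.
    scipy's ive(n, t) computes I_n(t) * exp(-t) directly."""
    return ive(np.abs(n), sigma**2)

n = np.arange(-10, 11)
for sigma in (0.5, 1.0, 2.0):
    s, d = sampled_gaussian(n, sigma), discrete_gaussian(n, sigma)
    # One simple fine-scale symptom: the sampled kernel's mass deviates
    # from 1, while the discrete analogue sums to 1 (up to truncation).
    print(f"sigma={sigma}: sampled sum={s.sum():.4f}, discrete sum={d.sum():.4f}")
```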

Deep neural network based recommendation systems have achieved great success as information filtering techniques in recent years. However, since model training from scratch requires sufficient data, deep learning-based recommendation methods still face the bottlenecks of insufficient data and computational inefficiency. Meta-learning, an emerging paradigm that learns to improve the learning efficiency and generalization ability of algorithms, has shown its strength in tackling the data sparsity issue. Recently, a growing number of studies on deep meta-learning based recommendation systems have emerged, aiming to improve performance in recommendation scenarios where available data is limited, e.g., user cold-start and item cold-start. This survey therefore provides a timely and comprehensive overview of current deep meta-learning based recommendation methods. Specifically, we propose a taxonomy that discusses existing methods according to recommendation scenarios, meta-learning techniques, and meta-knowledge representations, which together delineate the design space for meta-learning based recommendation methods. For each recommendation scenario, we further discuss technical details of how existing methods apply meta-learning to improve the generalization ability of recommendation models. Finally, we point out several limitations in current research and highlight some promising directions for future research in this area.
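
A minimal MAML-style sketch of the meta-learning idea the survey covers, applied to a cold-start task; the shapes and synthetic data are toy stand-ins for a real recommendation model:

```python
import torch

# Learn an initialization that adapts to a cold-start user from a few
# interactions: inner loop adapts per user, outer loop updates the init.
model = torch.nn.Linear(8, 1)               # tiny rating predictor
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr = 0.1

for step in range(100):
    meta_opt.zero_grad()
    # One "task" = one cold-start user with support/query interactions
    xs, ys = torch.randn(5, 8), torch.randn(5, 1)     # support set
    xq, yq = torch.randn(5, 8), torch.randn(5, 1)     # query set
    # Inner loop: one gradient step adapted to this user
    loss_s = torch.nn.functional.mse_loss(model(xs), ys)
    grads = torch.autograd.grad(loss_s, list(model.parameters()),
                                create_graph=True)
    adapted = [p - inner_lr * g for p, g in zip(model.parameters(), grads)]
    # Query loss under the adapted parameters (functional forward pass)
    pred_q = xq @ adapted[0].t() + adapted[1]
    loss_q = torch.nn.functional.mse_loss(pred_q, yq)
    loss_q.backward()                       # meta-gradient into the initialization
    meta_opt.step()
```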
