
In the analysis of spatially resolved transcriptomics data, detecting spatially variable genes (SVGs) is crucial. Numerous computational methods exist, but varying SVG definitions and methodologies lead to incomparable results. We review 33 state-of-the-art methods, categorizing SVGs into three types: overall, cell-type-specific, and spatial-domain-marker SVGs. Our review explains the intuitions underlying these methods, summarizes their applications, and categorizes the hypothesis tests they use in terms of the trade-off between generality and specificity in SVG detection. We discuss challenges in SVG detection and propose future directions for improvement. Our review offers insights for method developers and users, advocating for category-specific benchmarking.

Related Content

This paper examines the reconstruction of a family of dynamical systems with neuromorphic behavior from a single scalar time series. A model of a physiological neuron based on the Hodgkin-Huxley formalism is considered. A single time series of one of its variables is shown to be enough to train a neural network that can operate as a discrete-time dynamical system with one control parameter. The neural network system is created in two steps. First, delay-coordinate embedding vectors are constructed from the original time series and their dimension is reduced by means of a variational autoencoder to obtain the recovered state-space vectors. It is shown that an appropriate reduced dimension can be determined by analyzing the autoencoder training process. Second, pairs of the recovered state-space vectors at consecutive time steps, supplied with a constant value playing the role of a control parameter, are used to train another neural network to operate as a recurrent map. The regimes of the neural network system created in this way, observed as its control parameter is varied, are in very good agreement with those of the original system, even though they were not explicitly presented during training.
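
As a rough illustration of the first step, the Python sketch below builds delay-coordinate embedding vectors from a scalar series and forms the consecutive-step pairs used in the second step. The sine-plus-noise series, the embedding parameters (m=8, tau=3), and the omitted autoencoder and recurrent-map networks are all illustrative assumptions, not the paper's actual data or architecture.

```python
import numpy as np

def delay_embed(x, m, tau):
    """Delay-coordinate embedding: row t is (x[t], x[t+tau], ..., x[t+(m-1)*tau])."""
    n = len(x) - (m - 1) * tau
    return np.stack([x[k * tau : k * tau + n] for k in range(m)], axis=1)

# Toy scalar series standing in for the single measured neuron variable.
x = np.sin(0.1 * np.arange(2000)) + 0.05 * np.random.default_rng(0).normal(size=2000)

V = delay_embed(x, m=8, tau=3)     # embedding vectors, shape (1979, 8)
# A variational autoencoder would now compress V into low-dimensional
# recovered states z; pairs (z[t], z[t+1]), supplied with a constant
# control value, would then train the recurrent-map network.  Both
# networks are omitted from this sketch.
pairs = list(zip(V[:-1], V[1:]))   # consecutive-step training pairs
```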

We propose a method to generate statistically representative synthetic data from a given dataset. The main goal of our method is for the created dataset to mimic the between-feature correlations present in the original data, while also offering a tunable parameter to influence the privacy level. In particular, our method constructs a statistical map by using the empirical conditional distributions between the features of the original dataset. We describe in detail the algorithms used both to construct the statistical map and to generate synthetic observations from it. This approach is tested in three different ways: with a hand-calculated example; a manufactured dataset; and a real-world energy-related dataset of consumption/production of households on Madeira Island. We assess our method's performance by comparing the Pearson correlation matrices of the original and synthetic datasets. The proposed methodology is general in the sense that it does not rely on the particular test dataset used, and we expect it to be applicable in a much broader context than indicated here.
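
To make the idea concrete, here is a minimal Python sketch of one way a statistical map built from empirical conditional distributions could generate synthetic rows. The quantile binning, the left-to-right feature order, and the helper synthesize are hypothetical choices for illustration, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize(data, n_samples, n_bins=10):
    """Sample synthetic rows feature-by-feature: feature j is drawn from
    the empirical distribution of original rows whose feature j-1 falls in
    the same quantile bin as the value just generated (a crude
    conditional sampler; NOT the paper's exact construction)."""
    n, d = data.shape
    out = np.empty((n_samples, d))
    out[:, 0] = rng.choice(data[:, 0], size=n_samples)  # feature 0: marginal
    for j in range(1, d):
        edges = np.quantile(data[:, j - 1], np.linspace(0, 1, n_bins + 1))
        bin_of = lambda v: np.clip(np.searchsorted(edges, v) - 1, 0, n_bins - 1)
        data_bins = bin_of(data[:, j - 1])
        for i in range(n_samples):
            pool = data[data_bins == bin_of(out[i, j - 1]), j]
            out[i, j] = rng.choice(pool) if len(pool) else rng.choice(data[:, j])
    return out

# Compare correlation structure, as the abstract does via Pearson matrices.
real = rng.multivariate_normal([0, 0, 0],
                               [[1, .6, .2], [.6, 1, .4], [.2, .4, 1]], 500)
fake = synthesize(real, 500)
print(np.corrcoef(real.T).round(2))
print(np.corrcoef(fake.T).round(2))
```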

Understanding animal behaviour is central to predicting, understanding, and mitigating the impacts of natural and anthropogenic changes on animal populations and ecosystems. However, the challenges of acquiring and processing long-term, ecologically relevant data in wild settings have constrained the scope of behavioural research. The increasing availability of Unmanned Aerial Vehicles (UAVs), coupled with advances in machine learning, has opened new opportunities for wildlife monitoring using aerial tracking. However, the limited availability of datasets with wild animals in natural habitats has hindered progress in automated computer vision solutions for long-term animal tracking. Here we introduce BuckTales, the first large-scale UAV dataset designed to address the multi-object tracking (MOT) and re-identification (Re-ID) problems for wild animals, specifically the mating behaviour (or lekking) of blackbuck antelopes. Collected in collaboration with biologists, the MOT dataset includes over 1.2 million annotations comprising 680 tracks across 12 high-resolution (5.4K) videos, each averaging 66 seconds and featuring 30 to 130 individuals. The Re-ID dataset includes 730 individuals captured with two UAVs simultaneously. The dataset is designed to drive scalable, long-term animal behaviour tracking using multiple camera sensors. By providing baseline performance with two detectors and benchmarking several state-of-the-art tracking methods, our dataset reflects the real-world challenges of tracking wild animals in socially and ecologically relevant contexts. In making these data widely available, we hope to catalyze progress in MOT and Re-ID for wild animals, fostering insights into animal behaviour, conservation efforts, and ecosystem dynamics through automated, long-term monitoring.

If the conclusion of a data analysis is sensitive to dropping very few data points, that conclusion might hinge on the particular data at hand rather than representing a more broadly applicable truth. How could we check whether this sensitivity holds? One idea is to consider every small subset of data, drop it from the dataset, and re-run our analysis. But running MCMC to approximate a Bayesian posterior is already very expensive; running multiple times is prohibitive, and the number of re-runs needed here is combinatorially large. Recent work proposes a fast and accurate approximation to find the worst-case dropped data subset, but that work was developed for problems based on estimating equations -- and does not directly handle Bayesian posterior approximations using MCMC. We make two principal contributions in the present work. We adapt the existing data-dropping approximation to estimators computed via MCMC. Observing that Monte Carlo errors induce variability in the approximation, we use a variant of the bootstrap to quantify this uncertainty. We demonstrate how to use our approximation in practice to determine whether there is non-robustness in a problem. Empirically, our method is accurate in simple models, such as linear regression. In models with complicated structure, such as hierarchical models, the performance of our method is mixed.
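
The underlying question can be illustrated without MCMC at all. The brute-force Python sketch below greedily drops the points whose removal most weakens an ordinary least-squares slope and checks whether the conclusion (the sign of the slope) flips. This naive loop is illustrative only; the paper's contribution is a fast approximation to this kind of worst-case search for estimators computed via MCMC, which the sketch does not implement.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 0.1 * x + rng.normal(size=n)          # weak positive effect

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

base = slope(x, y)
# Greedy heuristic: repeatedly drop the point whose removal moves the
# slope most toward (or past) zero, up to 2% of the data.
keep = np.ones(n, dtype=bool)
for _ in range(int(0.02 * n)):
    idx = np.flatnonzero(keep)
    deltas = []
    for i in idx:
        keep[i] = False
        deltas.append(slope(x[keep], y[keep]))
        keep[i] = True
    i_star = idx[np.argmin(np.sign(base) * np.array(deltas))]
    keep[i_star] = False

print(f"full-data slope: {base:.3f}, after dropping 2%: {slope(x[keep], y[keep]):.3f}")
```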

Cirquent calculus is a proof system with an inherent ability to account for the sharing of subcomponents in logical expressions. Within its framework, this article constructs an axiomatization, CL18, of the basic propositional fragment of computability logic, the game-semantically conceived logic of computational resources and tasks. The nonlogical atoms of this fragment represent arbitrary so-called static games, and the connectives of its logical vocabulary are negation and the parallel and choice versions of conjunction and disjunction. The main technical result of the article is a proof of the soundness and completeness of CL18 with respect to the semantics of computability logic.

Understanding the global organization of complicated and high-dimensional data is of primary interest for many branches of applied science. It is typically achieved by applying dimensionality reduction techniques that map the data into a lower-dimensional space. This family of methods, while preserving local structures and features, often misses the global structure of the dataset. Clustering techniques are another class of methods that operate on the data in the ambient space. They group together points that are similar according to a fixed similarity criterion; however, unlike dimensionality reduction techniques, they do not provide information about the global organization of the data. Leveraging ideas from Topological Data Analysis, in this paper we provide an additional layer on top of the output of any clustering algorithm. This data structure, called ClusterGraph, provides information about the global layout of the clusters obtained from the chosen clustering algorithm. Appropriate measures are provided to assess the quality and usefulness of the obtained representation. Subsequently the ClusterGraph, possibly with an appropriate structure-preserving simplification, can be visualized and used in synergy with state-of-the-art exploratory data analysis techniques.
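
One plausible reading of the construction, as a Python sketch: run any clustering algorithm, then add a graph layer whose vertices are the clusters and whose edge weights encode inter-cluster distances. The centroid-distance weighting and the KMeans/networkx choices below are illustrative assumptions, not necessarily the paper's exact definition.

```python
import numpy as np
from sklearn.cluster import KMeans
import networkx as nx

rng = np.random.default_rng(2)
# Four well-separated Gaussian blobs as toy data.
X = np.vstack([rng.normal(loc, 0.3, size=(100, 2))
               for loc in [(0, 0), (3, 0), (0, 3), (3, 3)]])

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Graph layer on top of the clustering output: one vertex per cluster,
# edges weighted by the distance between cluster centroids.
G = nx.Graph()
centroids = {c: X[labels == c].mean(axis=0) for c in np.unique(labels)}
for c, mu in centroids.items():
    G.add_node(c, centroid=mu)
for a in centroids:
    for b in centroids:
        if a < b:
            G.add_edge(a, b,
                       weight=float(np.linalg.norm(centroids[a] - centroids[b])))

print(sorted((u, v, round(d["weight"], 2)) for u, v, d in G.edges(data=True)))
```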

After nearly two decades of research, the question of a quantum PCP theorem for quantum Constraint Satisfaction Problems (CSPs) remains wide open. As a result, proving QMA-hardness of approximation for ground state energy estimation has remained elusive. Recently, it was shown [Bittel, Gharibian, Kliesch, CCC 2023] that a natural problem involving variational quantum circuits is QCMA-hard to approximate within ratio N^(1-eps) for any eps > 0 and N the input size. Unfortunately, this problem was not related to quantum CSPs, leaving the question of hardness of approximation for quantum CSPs open. In this work, we show that if instead of focusing on ground state energies, one considers computing properties of the ground space, QCMA-hardness of computing ground space properties can be shown. In particular, we show that it is (1) QCMA-complete within ratio N^(1-eps) to approximate the Ground State Connectivity problem (GSCON), and (2) QCMA-hard within the same ratio to estimate the amount of entanglement of a local Hamiltonian's ground state, denoted Ground State Entanglement (GSE). As a bonus, a simplification of our construction yields NP-completeness of approximation for a natural k-SAT reconfiguration problem, to be contrasted with the recent PCP-based PSPACE hardness of approximation results for a different definition of k-SAT reconfiguration [Karthik C.S. and Manurangsi, 2023, and Hirahara, Ohsaka, STOC 2024].

We provide abstract, general, and highly uniform rates of asymptotic regularity for a generalized stochastic Halpern-style iteration that incorporates a second mapping in the style of a Krasnoselskii-Mann iteration. This iteration is general in two ways: first, it incorporates stochasticity in a completely abstract way rather than fixing a sampling method; second, it includes as special cases stochastic versions of various schemes from the optimization literature, including Halpern's iteration as well as a Krasnoselskii-Mann iteration with Tikhonov regularization terms in the sense of Bo\c{t}, Csetnek and Meier. For these special cases we obtain, in particular, linear rates of asymptotic regularity, matching (or improving on) the best currently known rates for these iterations in stochastic optimization; quadratic rates of asymptotic regularity are obtained for the general iteration in the context of inner product spaces. We utilize these rates to give bounds on the oracle complexity of such iterations under suitable variance assumptions and batching strategies, again presented in an abstract style. Finally, we sketch how the schemes presented here can be instantiated in the context of reinforcement learning to yield novel methods for Q-learning.
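
For orientation, a Halpern-style iteration with a stochastic oracle takes the form x_{n+1} = alpha_n x_0 + (1 - alpha_n) T~(x_n), where T~ is a noisy evaluation of a nonexpansive map T. The Python sketch below instantiates this with a plane rotation and additive Gaussian noise; the map, the noise model, and the step sizes alpha_n = 1/(n+2) are illustrative assumptions, and the paper's generalized two-mapping scheme is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# A nonexpansive map on R^2 (rotation by a fixed angle), fixed point 0.
theta = 0.5
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x0 = np.array([5.0, -3.0])          # anchor point of the Halpern scheme
x = x0.copy()
for n in range(2000):
    alpha = 1.0 / (n + 2)
    noisy_Tx = T @ x + 0.01 * rng.normal(size=2)   # stochastic oracle for T(x)
    x = alpha * x0 + (1 - alpha) * noisy_Tx        # Halpern-style update

# Asymptotic regularity: ||x_n - T(x_n)|| becomes small as n grows
# (here its floor is set by the persistent oracle noise).
print(np.linalg.norm(x - T @ x))
```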

In decision-making, maxitive functions are used for worst-case and best-case evaluations. Maxitivity gives rise to a rich structure that is well-studied in the context of the pointwise order. In this article, we investigate maxitivity with respect to general preorders and provide a representation theorem for such functionals. The results are illustrated for different stochastic orders in the literature, including the usual stochastic order, the increasing convex/concave order, and the dispersive order.
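
For reference, maxitivity with respect to the pointwise order is the property stated below; as we read the abstract, the article's generalization replaces the pointwise order by an arbitrary preorder. This is the standard textbook definition, given here for orientation rather than quoted from the paper.

```latex
% Maxitivity of a functional \Phi with respect to the pointwise order:
% for all f, g in the domain,
\Phi(f \vee g) = \Phi(f) \vee \Phi(g) = \max\{\Phi(f),\, \Phi(g)\},
\qquad \text{where } (f \vee g)(x) := \max\{f(x),\, g(x)\}.
```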

A Riemannian geometric framework for Markov chain Monte Carlo (MCMC) is developed in which informed proposal densities for Metropolis-Hastings (MH) algorithms are constructed using the Fisher-Rao metric on the manifold of probability density functions (pdfs). We exploit the square-root representation of pdfs, under which the Fisher-Rao metric boils down to the standard $L^2$ metric on the positive orthant of the unit hypersphere. The square-root representation allows us to easily compute the geodesic distance between densities, resulting in a straightforward implementation of the proposed geometric MCMC methodology. Unlike the random walk MH, which blindly proposes a candidate state using no information about the target, the geometric MH algorithms move an uninformed base density (e.g., a random walk proposal density) towards different global/local approximations of the target density, allowing effective exploration of the distribution simultaneously at different granular levels of the state space. We compare the proposed geometric MH algorithm with other MCMC algorithms under various Markov chain orderings, namely the covariance, efficiency, Peskun, and spectral gap orderings. The superior performance of the geometric algorithms over other MH algorithms, such as the random walk Metropolis, independent MH, and variants of Metropolis-adjusted Langevin algorithms, is demonstrated in the context of various multimodal, nonlinear, and high-dimensional examples. In particular, we use extensive simulation and real data applications to compare these algorithms for analyzing mixture models, logistic regression models, spatial generalized linear mixed models, and ultra-high-dimensional Bayesian variable selection models. A publicly available R package accompanies the article.
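
The geodesic distance mentioned above has a simple closed form: under the square-root map, a density p is sent to sqrt(p) on the unit sphere in L^2, and the Fisher-Rao geodesic distance becomes (up to a convention-dependent constant factor) the great-circle angle arccos of the inner product of sqrt(p) and sqrt(q), i.e. the Bhattacharyya angle. A minimal Python sketch, using two Gaussians and a Riemann sum as illustrative stand-ins:

```python
import numpy as np
from scipy.stats import norm

# Geodesic distance between densities under the square-root map:
# d(p, q) = arccos( integral of sqrt(p * q) ), the arc length between
# sqrt(p) and sqrt(q) on the unit sphere in L^2 (up to a
# convention-dependent constant factor).
x = np.linspace(-10, 10, 4001)
p = norm.pdf(x, loc=0.0, scale=1.0)
q = norm.pdf(x, loc=1.0, scale=1.5)

inner = np.sum(np.sqrt(p * q)) * (x[1] - x[0])   # <sqrt(p), sqrt(q)> in L^2
d = np.arccos(np.clip(inner, -1.0, 1.0))
print(f"Fisher-Rao geodesic distance (Bhattacharyya angle): {d:.4f}")
```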
