
Graph-based two-sample tests and graph-based change-point detection that utilize a similarity graph provide a powerful tool for analyzing high-dimensional and non-Euclidean data, as these methods do not impose distributional assumptions on the data and perform well across various scenarios. Current graph-based tests that deliver efficacy across a broad spectrum of alternatives typically rely on the $K$-nearest neighbor graph or the $K$-minimum spanning tree. However, these graphs can be vulnerable for high-dimensional data due to the curse of dimensionality. To mitigate this issue, we propose to use a robust graph that is considerably less influenced by the curse of dimensionality. We also establish a theoretical foundation for graph-based methods utilizing this proposed robust graph and demonstrate their consistency under fixed alternatives for both low-dimensional and high-dimensional data.
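As a hedged illustration of graph-based two-sample testing (a generic edge-count test on a $K$-nearest neighbor graph with a permutation null; the function name and parameters are ours, and this is not the robust graph proposed in the paper):

```python
import numpy as np
from scipy.spatial import cKDTree

def edge_count_stat(x, y, k=3, seed=0):
    """Two-sample edge-count statistic on a k-NN similarity graph.

    Counts edges joining points from different samples; far fewer
    cross-sample edges than the permutation null suggests the two
    samples come from different distributions.
    """
    rng = np.random.default_rng(seed)
    z = np.vstack([x, y])
    labels = np.r_[np.zeros(len(x)), np.ones(len(y))]
    tree = cKDTree(z)
    _, idx = tree.query(z, k=k + 1)  # first neighbour is the point itself
    edges = {(min(i, j), max(i, j))
             for i, row in enumerate(idx) for j in row[1:]}
    observed = sum(labels[i] != labels[j] for i, j in edges)
    # permutation null for the cross-sample edge count
    null = []
    for _ in range(200):
        perm = rng.permutation(labels)
        null.append(sum(perm[i] != perm[j] for i, j in edges))
    p_value = np.mean([n <= observed for n in null])
    return observed, p_value
```

With well-separated samples, the observed cross-sample edge count falls far below its permutation null, giving a small p-value.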

Related content

The curse of dimensionality refers to the various phenomena that arise when analyzing and organizing data in high-dimensional spaces, phenomena that do not occur in low-dimensional settings such as the three-dimensional physical space of everyday experience.
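One facet of this can be checked numerically: the relative contrast between the nearest and farthest points shrinks as the dimension grows, so distance-based neighborhood graphs become less informative. A minimal sketch (all names ours):

```python
import numpy as np

def distance_contrast(dim, n=200, seed=0):
    """Relative contrast (max - min) / min of distances from a reference
    point to n - 1 uniform random points; it shrinks as the dimension
    grows, one standard illustration of the curse of dimensionality."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(size=(n, dim))
    d = np.linalg.norm(pts - pts[0], axis=1)[1:]  # distances to point 0
    return (d.max() - d.min()) / d.min()
```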

We present a novel clustering algorithm, visClust, that is based on lower-dimensional data representations and visual interpretation. To this end, we design a transformation that allows the data to be represented by a binary integer array, enabling the use of image-processing methods to select a partition. Qualitative and quantitative analyses, measured by accuracy and the adjusted Rand index, show that the algorithm performs well while requiring low runtime and RAM. We compare the results to six state-of-the-art algorithms with available code, confirming the quality of visClust through superior performance in most experiments. Moreover, the algorithm requires just one obligatory input parameter while allowing optimization via optional parameters. The code is available on GitHub and straightforward to use.
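A toy sketch of the rasterise-then-label idea (our simplification for illustration, not the published visClust algorithm): bin 2-D data into a binary occupancy grid and treat connected components of the resulting image as candidate clusters.

```python
import numpy as np
from scipy import ndimage

def binary_image_clusters(x, bins=20):
    """Rasterise 2-D data into a binary occupancy grid and label the
    connected components as candidate clusters (a simplified sketch,
    not the published visClust algorithm)."""
    hist, xe, ye = np.histogram2d(x[:, 0], x[:, 1], bins=bins)
    binary = hist > 0
    labelled, n_clusters = ndimage.label(binary)
    # map each point back to the component label of its grid cell
    ix = np.clip(np.digitize(x[:, 0], xe) - 1, 0, bins - 1)
    iy = np.clip(np.digitize(x[:, 1], ye) - 1, 0, bins - 1)
    return labelled[ix, iy], n_clusters
```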

Test-negative designs are widely used for post-market evaluation of vaccine effectiveness. Different from classical test-negative designs where only healthcare-seekers with symptoms are included, recent test-negative designs have involved individuals with various reasons for testing, especially in an outbreak setting. While including these data can increase sample size and hence improve precision, concerns have been raised about whether they will introduce bias into the current framework of test-negative designs, thereby demanding a formal statistical examination of this modified design. In this article, using statistical derivations, causal graphs, and numerical simulations, we show that the standard odds ratio estimator may be biased if various reasons for testing are not accounted for. To eliminate this bias, we identify three categories of reasons for testing, including symptoms, disease-unrelated reasons, and case contact tracing, and characterize associated statistical properties and estimands. Based on our characterization, we propose stratified estimators that can incorporate multiple reasons for testing to achieve consistent estimation and improve precision by maximizing the use of data. The performance of our proposed method is demonstrated through simulation studies.
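As one standard way to combine 2x2 tables across strata defined by reason for testing, a Mantel-Haenszel-type estimator can be sketched as follows (illustrative only; the paper's stratified estimators and their properties are characterized formally in the article):

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel combined odds ratio across strata.

    Each stratum is a 2x2 table (a, b, c, d) = (vaccinated cases,
    vaccinated controls, unvaccinated cases, unvaccinated controls),
    with strata defined e.g. by reason for testing."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den
```

With a single stratum this reduces to the crude odds ratio $ad/bc$.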

PDDSparse is a new hybrid parallelisation scheme for solving large-scale elliptic boundary value problems on supercomputers, which can be described as a Feynman-Kac formula for domain decomposition. At its core lies a sparse stochastic linear system for the solutions on the interfaces, whose entries are generated via Monte Carlo simulations. Assuming small statistical errors, we show that the random system matrix ${\tilde G}(\omega)$ is near a nonsingular M-matrix $G$, i.e. ${\tilde G}(\omega)+E=G$ where $||E||/||G||$ is small. Using nonstandard arguments, we bound $||G^{-1}||$ and the condition number of $G$, showing that both of them grow moderately with the degrees of freedom of the discretisation. Moreover, the truncated Neumann series of $G^{-1}$ -- which is straightforward to calculate -- is the basis for an excellent preconditioner for ${\tilde G}(\omega)$. These findings are supported by numerical evidence.
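The truncated-Neumann idea can be sketched on a toy diagonally dominant M-matrix (names and setup ours): writing $G = D(I-M)$ with $D = \mathrm{diag}(G)$, the partial sum $(I + M + \dots + M^k)D^{-1}$ approximates $G^{-1}$ whenever the spectral radius of $M$ is below one.

```python
import numpy as np

def neumann_preconditioner(G, order=3):
    """Truncated Neumann series approximation of G^{-1}.

    With G = D(I - M), D = diag(G), we have
    G^{-1} = (I + M + M^2 + ...) D^{-1}, convergent when the
    spectral radius of M is below one (e.g. for strictly
    diagonally dominant M-matrices)."""
    n = len(G)
    D = np.diag(np.diag(G))
    M = np.eye(n) - np.linalg.solve(D, G)
    P = np.eye(n)
    term = np.eye(n)
    for _ in range(order):
        term = term @ M
        P = P + term
    return P @ np.linalg.inv(D)
```

By construction $PG = I - M^{k+1}$, so the preconditioned matrix is close to the identity and far better conditioned than $G$.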

Probabilistic graphical models are widely used to model complex systems with uncertainty. Traditionally, Gaussian directed graphical models are applied for the analysis of large networks with continuous variables, since they provide conditional and marginal distributions in closed form, simplifying the inferential task. The Gaussianity and linearity assumptions are often adequate, yet they can lead to poor performance in some practical applications. In this paper, we model each variable in a graph $G$ as a polynomial regression of its parents to capture complex relationships between individual variables, together with a utility function of polynomial form. Since the marginal posterior distributions of individual variables can become analytically intractable, we develop a message-passing algorithm that propagates information throughout the network solely using moments, which enables the expected utility scores to be calculated exactly. We illustrate how the proposed methodology works in a decision problem in energy systems planning.
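Moment propagation through a polynomial regression can be illustrated for a single Gaussian parent (a hand-worked special case for intuition, not the paper's general message-passing algorithm):

```python
def child_moments(a, b, c, mu, var, noise_var):
    """First two moments of Y = a + b*X + c*X**2 + eps for a Gaussian
    parent X ~ N(mu, var) and independent zero-mean noise eps with
    variance noise_var, using the Gaussian moment identities
    E[X^2] = mu^2 + var,            E[X^3] = mu^3 + 3*mu*var,
    E[X^4] = mu^4 + 6*mu^2*var + 3*var^2."""
    m1 = mu
    m2 = mu**2 + var
    m3 = mu**3 + 3 * mu * var
    m4 = mu**4 + 6 * mu**2 * var + 3 * var**2
    ey = a + b * m1 + c * m2
    ey2 = (a**2 + b**2 * m2 + c**2 * m4
           + 2 * a * b * m1 + 2 * a * c * m2 + 2 * b * c * m3) + noise_var
    return ey, ey2 - ey**2
```

In the linear case ($c = 0$) this recovers the familiar $\mathrm{Var}(Y) = b^2\,\mathrm{var} + \text{noise\_var}$.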

Robust Markov Decision Processes (RMDPs) are a widely used framework for sequential decision-making under parameter uncertainty. RMDPs have been extensively studied when the objective is to maximize the discounted return, but little is known about average optimality (optimizing the long-run average of the rewards obtained over time) and Blackwell optimality (remaining discount optimal for all discount factors sufficiently close to 1). In this paper, we prove several foundational results for RMDPs beyond the discounted return. We show that average optimal policies can be chosen stationary and deterministic for sa-rectangular RMDPs but, perhaps surprisingly, that history-dependent (Markovian) policies strictly outperform stationary policies for average optimality in s-rectangular RMDPs. We also study Blackwell optimality for sa-rectangular RMDPs, where we show that {\em approximate} Blackwell optimal policies always exist, although Blackwell optimal policies may not exist. We also provide a sufficient condition for their existence, which encompasses virtually all examples from the literature. We then discuss the connection between average and Blackwell optimality, and we describe several algorithms to compute the optimal average return. Interestingly, our approach leverages the connections between RMDPs and stochastic games.
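For intuition on the average-reward criterion in the non-robust case: for a fixed policy inducing an ergodic Markov chain, the long-run average reward is the stationary distribution dotted with the reward vector. A minimal sketch (names ours):

```python
import numpy as np

def average_reward(P, r):
    """Long-run average reward of a fixed policy whose induced Markov
    chain has transition matrix P and reward vector r: solve
    pi P = pi, sum(pi) = 1 for the stationary distribution pi,
    then return pi . r."""
    n = len(P)
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.r_[np.zeros(n), 1.0]
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(pi @ r)
```

Robust average optimality replaces the single $P$ with a worst case over an uncertainty set, which is where the rectangularity structure discussed above matters.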

Computer-aided molecular design (CAMD) studies quantitative structure-property relationships and discovers desired molecules using optimization algorithms. With the emergence of machine learning models, CAMD score functions may be replaced by various surrogates to automatically learn the structure-property relationships. Due to their outstanding performance on graph domains, graph neural networks (GNNs) have recently appeared frequently in CAMD. But using GNNs introduces new optimization challenges. This paper formulates GNNs using mixed-integer programming and then integrates this GNN formulation into the optimization and machine learning toolkit OMLT. To characterize and formulate molecules, we inherit the well-established mixed-integer optimization formulation for CAMD and propose symmetry-breaking constraints to remove symmetric solutions caused by graph isomorphism. In two case studies, we investigate fragment-based odorant molecular design with more practical requirements to test the compatibility and performance of our approaches.
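Embedding a trained ReLU unit into a mixed-integer program typically uses the standard big-M encoding $y \ge w^\top x + b$, $y \ge 0$, $y \le w^\top x + b + M(1-z)$, $y \le Mz$ with a binary activation indicator $z$. The checker below (names ours; this is the generic encoding, not OMLT's API) verifies whether a candidate assignment satisfies these constraints:

```python
import numpy as np

def relu_bigM_constraints(w, b, x, y, z, M=100.0):
    """Check the standard big-M MILP encoding of y = max(0, w.x + b),
    as used when embedding trained ReLU networks into a mixed-integer
    program; z in {0, 1} indicates whether the unit is active."""
    pre = float(np.dot(w, x) + b)
    return (y >= pre and y >= 0.0
            and y <= pre + M * (1 - z)
            and y <= M * z)
```

For a valid $M$ (an upper bound on $|w^\top x + b|$ over the input domain), the only feasible $(y, z)$ pairs encode the exact ReLU output.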

In this work we make use of Livens' principle (sometimes also referred to as the Hamilton-Pontryagin principle) in order to obtain a novel structure-preserving integrator for mechanical systems. In contrast to the canonical Hamiltonian equations of motion, the Euler-Lagrange equations pertaining to Livens' principle circumvent the need to invert the mass matrix. This is an essential advantage in the presence of singular mass matrices, which can cause severe difficulties for the modelling and simulation of multibody systems. Moreover, Livens' principle unifies the Lagrangian and Hamiltonian viewpoints on mechanics. Additionally, the present framework avoids the need to set up the system's Hamiltonian. The novel scheme algorithmically conserves a general energy function and aims at the preservation of momentum maps corresponding to symmetries of the system. We present an extension to mechanical systems subject to holonomic constraints. The performance of the newly devised method is studied in representative examples.
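For the harmonic oscillator with $L = \tfrac{1}{2}mv^2 - \tfrac{1}{2}kq^2$, the Livens equations read $\dot q = v$, $\dot p = -kq$, $p = mv$. An implicit-midpoint step (a generic energy-conserving choice for this linear system after eliminating $p = mv$, not the scheme devised in the paper) can be sketched as:

```python
import numpy as np

def midpoint_step(q, v, h, m=1.0, k=1.0):
    """One implicit-midpoint step for the harmonic oscillator in Livens
    form (q' = v, p' = -k q, p = m v); for this linear system the
    midpoint equations reduce to a 2x2 linear solve:
        q1 - (h/2) v1          = q + (h/2) v
        (h k / 2m) q1 + v1     = v - (h k / 2m) q
    """
    A = np.array([[1.0, -h / 2.0],
                  [h * k / (2.0 * m), 1.0]])
    b = np.array([q + h * v / 2.0,
                  v - h * k * q / (2.0 * m)])
    q1, v1 = np.linalg.solve(A, b)
    return q1, v1
```

The midpoint rule conserves quadratic invariants, so the energy $\tfrac{1}{2}mv^2 + \tfrac{1}{2}kq^2$ stays constant up to round-off over long runs.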

As technology continues to advance at a rapid pace, the prevalence of multivariate functional data (MFD) has expanded across diverse disciplines, spanning biology, climatology, finance, and numerous other fields of study. Although MFD are encountered in various fields, the development of methods for testing hypotheses about mean functions, especially the general linear hypothesis testing (GLHT) problem, has been limited for such data. In this study, we propose and study a new global test for the GLHT problem for MFD, which includes one-way FMANOVA, post hoc, and contrast analyses as special cases. The asymptotic null distribution of the test statistic is shown to be a chi-squared-type mixture depending on the eigenvalues of the heteroscedastic covariance functions. The distribution of the chi-squared-type mixture can be well approximated by a three-cumulant matched chi-squared approximation with its approximation parameters estimated from the data. By incorporating an adjustment coefficient, the proposed test performs effectively irrespective of the correlation structure in the functional data, even when dealing with a relatively small sample size. Additionally, the proposed test is shown to be root-n consistent, that is, it has nontrivial power against local alternatives. Simulation studies and a real data example demonstrate the finite-sample performance and broad applicability of the proposed test.
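The three-cumulant matching can be sketched for a mixture $\sum_i \lambda_i \chi^2_1$ approximated by $\beta\chi^2_d + c$: using $\kappa_r = 2^{r-1}(r-1)!\sum_i \lambda_i^r$ together with $\kappa_1 = \beta d + c$, $\kappa_2 = 2\beta^2 d$, $\kappa_3 = 8\beta^3 d$, the parameters solve in closed form (a standard construction; the paper estimates the eigenvalues from data):

```python
import numpy as np
from math import factorial

def three_cumulant_chi2(lam):
    """Match the first three cumulants of sum_i lam_i * chi2_1 to those
    of beta * chi2_d + c, returning (beta, d, c)."""
    lam = np.asarray(lam, float)
    k1, k2, k3 = (2**(r - 1) * factorial(r - 1) * float(np.sum(lam**r))
                  for r in (1, 2, 3))
    beta = k3 / (4.0 * k2)
    d = 8.0 * k2**3 / k3**2
    c = k1 - beta * d
    return beta, d, c
```

When all weights are equal, the approximation is exact: $\lambda_i \equiv \lambda$ gives $\beta = \lambda$, $d = $ number of terms, $c = 0$.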

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on the explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency-map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
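Shepard's law models perceived similarity as an exponential decay with distance in psychological space. A toy readout (our hypothetical formalization for illustration, not the preregistered study's exact model) predicts the AI's label in proportion to how similar the AI's explanation is to the explanation the explainee would give for each candidate label:

```python
import numpy as np

def shepard_similarity(x, y, sensitivity=1.0):
    """Shepard's universal law of generalization: similarity decays
    exponentially with distance in psychological space."""
    return np.exp(-sensitivity * np.linalg.norm(np.asarray(x) - np.asarray(y)))

def predicted_choice(ai_explanation, own_explanations, sensitivity=1.0):
    """Hypothetical readout: the explainee's predicted probability that
    the AI picks each label, proportional to the similarity between the
    AI's saliency map and the explainee's own map for that label."""
    sims = np.array([shepard_similarity(ai_explanation, e, sensitivity)
                     for e in own_explanations])
    return sims / sims.sum()
```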

In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communications and synchronisation, and is difficult to scale. In this paper we present four algorithms to solve these problems. The combination of these algorithms enables each agent to improve its task-allocation strategy through reinforcement learning, while changing how much it explores the system in response to how optimal it believes its current strategy is, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task-allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
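The idea of modulating exploration by how optimal an agent believes its current strategy is can be sketched with a reward-gap-driven exploration rate (a generic sketch with names and parameters ours; the paper's four algorithms are not reproduced here):

```python
def adaptive_epsilon(recent_rewards, target, eps_min=0.05, eps_max=0.5):
    """Exploration rate grows with the gap between a target reward and
    the recent average: an agent confident in its strategy mostly
    exploits, while a dissatisfied one explores more."""
    if not recent_rewards:
        return eps_max  # no experience yet: explore maximally
    gap = max(0.0, target - sum(recent_rewards) / len(recent_rewards))
    return min(eps_max, eps_min + (eps_max - eps_min) * gap / max(target, 1e-9))
```

An agent would use this value as the epsilon in an epsilon-greedy action choice, recomputing it as its reward history evolves.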
