1. Temporal trends in species distributions are necessary for monitoring changes in biodiversity, which aids policymakers and conservationists in making informed decisions. Dynamic species distribution models are often fitted to ecological time series data using Markov chain Monte Carlo algorithms to produce these temporal trends. However, the fitted models can be time-consuming to produce and run, making it inefficient to refit them as new observations become available. 2. We propose an algorithm that updates model parameters and the latent state distribution (e.g. true occupancy) using the saved information from a previously fitted model. The algorithm capitalises on the strength of importance sampling to generate new posterior samples of interest by updating the saved model output. It was validated with simulation studies on linear Gaussian state-space models and occupancy models, and we applied the framework to Crested Tits in Switzerland and Yellow Meadow Ants in the UK. 3. We found that models updated with the proposed algorithm captured the true model parameters and latent state values as well as models refitted to the expanded dataset, while being much faster to run and preserving the trajectories of the derived quantities. 4. The proposed approach serves as an alternative to conventional methods for updating state-space models (SSMs), and it is most beneficial when the fitted SSMs have long run times. Overall, we provide a Monte Carlo algorithm for efficiently updating complex models, a key issue in developing biodiversity models and indicators.
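To make the update step concrete, the following is a minimal sketch of the generic sampling-importance-resampling idea that such an update builds on: saved posterior draws are reweighted by the likelihood of the newly arrived observations and then resampled. The function names (`update_posterior`, `loglik_new`) and the toy Gaussian observation model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def update_posterior(theta_samples, loglik_new, rng=None):
    """Sampling-importance-resampling: reweight saved posterior draws by the
    likelihood of newly arrived data, then resample with those weights."""
    rng = np.random.default_rng() if rng is None else rng
    logw = np.array([loglik_new(theta) for theta in theta_samples])
    logw -= logw.max()                        # stabilise before exponentiating
    weights = np.exp(logw)
    weights /= weights.sum()
    idx = rng.choice(len(theta_samples), size=len(theta_samples), p=weights)
    return theta_samples[idx]

# Toy illustration: saved draws for a mean parameter, plus ten new observations
# from a Gaussian observation model with known standard deviation 0.3.
rng = np.random.default_rng(1)
saved_draws = rng.normal(0.5, 0.2, size=(5000, 1))     # stand-in for saved MCMC output
y_new = rng.normal(0.7, 0.3, size=10)
loglik = lambda theta: -0.5 * np.sum((y_new - theta[0]) ** 2) / 0.3 ** 2
updated_draws = update_posterior(saved_draws, loglik, rng)
print(updated_draws.mean(), updated_draws.std())
```

In practice importance weights can degenerate when the new data are highly informative, so diagnostics such as the effective sample size are typically monitored alongside this kind of update.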
With the rising popularity of the internet and the widespread use of networked information systems hosted in the cloud and in data centers, the privacy and security of individuals and organizations have become critically important. Encryption is one of the most effective technologies for meeting these requirements, protecting information exchanged over public channels. To this end, researchers have developed a wide assortment of encryption algorithms to accommodate the varied requirements of the field, grounding them in hard mathematical problems so as to make the encrypted communication as difficult to compromise as possible, thereby protecting personal information and reducing the likelihood of successful attacks. However complex and varied the requirements of these applications are, attempts to break the resulting schemes continue, so systems for evaluating and verifying deployed cryptographic algorithms remain necessary. The study of an encryption algorithm with the aim of finding a practical and efficient technique to break it, or of detecting and repairing its weak points, is known as cryptanalysis. Cryptanalysts have developed several methods for breaking ciphers, such as exploiting a critical vulnerability in the underlying mathematics to recover the secret key or recovering the plaintext from the ciphertext. The literature documents various attacks against supposedly secure cryptographic algorithms, and the strategies and mathematical techniques widely employed enable cryptanalysts to demonstrate their findings, identify weaknesses, and expose flaws in algorithms.
This paper provides a comprehensive analysis of the optimization and performance evaluation of various routing algorithms in the context of computer networks. Routing algorithms are critical for determining the most efficient path for data transmission between nodes in a network. The efficiency, reliability, and scalability of a network depend heavily on the choice and optimization of its routing algorithm. The paper begins with an overview of fundamental routing strategies, including shortest path, flooding, distance vector, and link state algorithms, and extends to more sophisticated techniques.
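As a concrete reference point for the shortest-path family mentioned above, here is a minimal Dijkstra implementation; the toy topology and link costs are invented for illustration.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` over a dict-of-dicts graph
    {node: {neighbour: weight, ...}} with non-negative edge weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                              # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical topology with illustrative link costs.
net = {"A": {"B": 2, "C": 5}, "B": {"C": 1, "D": 4}, "C": {"D": 1}, "D": {}}
print(dijkstra(net, "A"))   # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```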
Digital credentials are a cornerstone of digital identity on the Internet. To preserve privacy, credentials should support certain functionalities. One of these is selective disclosure, which allows users to reveal only the claims or attributes they choose. This paper presents a novel approach to selective disclosure that combines Merkle hash trees and Boneh-Lynn-Shacham (BLS) signatures. By combining these primitives, we achieve selective disclosure of claims within a single credential and the creation of a verifiable presentation containing selectively disclosed claims from multiple credentials signed by different parties. Beyond selective disclosure, the same approach enables issuing credentials signed by multiple issuers.
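A minimal sketch of the Merkle-tree half of this construction is given below: each claim is hashed into a leaf, the issuer would sign the resulting root (the BLS signature step is omitted here), and the holder discloses a chosen claim together with its authentication path. The claim values and helper names (`build_tree`, `proof`, `verify`) are illustrative assumptions, not the paper's implementation.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return every level of the Merkle tree, from the leaves up to the root."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        if len(lvl) % 2:                     # duplicate the last node on odd levels
            lvl = lvl + [lvl[-1]]
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def proof(levels, index):
    """Sibling hashes needed to recompute the root for the leaf at `index`."""
    path = []
    for lvl in levels[:-1]:
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        path.append((lvl[index ^ 1], index % 2))   # (sibling hash, node-is-right-child)
        index //= 2
    return path

def verify(leaf, path, root):
    node = leaf
    for sibling, node_is_right in path:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root

# Illustrative credential: four claims, of which only one holder-chosen claim is revealed.
claims = {"name": "Alice", "dob": "1990-01-01", "nationality": "CH", "degree": "MSc"}
leaves = [h(f"{k}:{v}".encode()) for k, v in sorted(claims.items())]
levels = build_tree(leaves)
root = levels[-1][0]               # in the full scheme, the issuer BLS-signs this root
disclosure = proof(levels, 1)      # reveal only the second (sorted) claim
assert verify(leaves[1], disclosure, root)
```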
Living organisms interact with their surroundings in a closed-loop fashion, where sensory inputs dictate the initiation and termination of behaviours. Even simple animals are able to develop and execute complex plans, which has not yet been replicated in robotics using pure closed-loop input control. We propose a solution to this problem by defining a set of discrete and temporary closed-loop controllers, called "tasks", each representing a closed-loop behaviour. We further introduce a supervisory module which has an innate understanding of physics and causality, through which it can simulate the execution of task sequences over time and store the results in a model of the environment. On the basis of this model, plans can be made by chaining temporary closed-loop controllers. The proposed framework was implemented for a real robot and tested in two scenarios as proof of concept.
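The planning idea can be caricatured as follows: each task is a self-terminating closed-loop behaviour, and the supervisory module simulates candidate task sequences on its internal model before committing to one. Everything in this sketch, including the `Task` interface, the toy one-dimensional world, and the exhaustive search over orderings, is an illustrative assumption rather than the authors' implementation.

```python
from dataclasses import dataclass
from itertools import permutations

@dataclass
class Task:
    """A temporary closed-loop behaviour: runs until its termination condition holds."""
    name: str
    effect: callable      # state -> state, the supervisor's physics prediction
    done: callable        # state -> bool, termination condition

def simulate(tasks, state):
    """Predict the outcome of executing a task sequence on the internal model."""
    for t in tasks:
        state = t.effect(state)
        if not t.done(state):
            return None   # sequence predicted to fail
    return state

def plan(tasks, state, goal):
    """Search task orderings and return one whose simulated outcome reaches the goal."""
    for seq in permutations(tasks):
        outcome = simulate(seq, state)
        if outcome is not None and goal(outcome):
            return [t.name for t in seq]
    return None

# Toy 1-D example: move to the object, then push it to position 5.
start = {"robot": 0.0, "object": 2.0}
move = Task("move_to_object",
            lambda s: {**s, "robot": s["object"]},
            lambda s: abs(s["robot"] - s["object"]) < 1e-6)
push = Task("push_object",
            lambda s: {**s, "robot": 5.0, "object": 5.0}
                      if abs(s["robot"] - s["object"]) < 1e-6 else s,
            lambda s: s["object"] == 5.0)
print(plan([push, move], start, lambda s: s["object"] == 5.0))
# -> ['move_to_object', 'push_object']
```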
Validation metrics are key for the reliable tracking of scientific progress and for bridging the current chasm between artificial intelligence (AI) research and its translation into practice. However, increasing evidence shows that particularly in image analysis, metrics are often chosen inadequately in relation to the underlying research problem. This could be attributed to a lack of accessibility of metric-related knowledge: While taking into account the individual strengths, weaknesses, and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multi-stage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides the first reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Focusing on biomedical image analysis but with the potential of transfer to other fields, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. To facilitate comprehension, illustrations and specific examples accompany each pitfall. As a structured body of information accessible to researchers of all levels of expertise, this work enhances global comprehension of a key topic in image analysis validation.
Numerical simulation of moving immersed solid bodies in fluids is now practiced routinely, following the pioneering work of Peskin and co-workers on the immersed boundary method (IBM), of Glowinski and co-workers on the fictitious domain method (FDM), and of others on related methods. A variety of variants of the IBM and FDM approaches have been published, most of which rely on a background mesh for the fluid equations and track the solid body using Lagrangian points. The key idea common to these methods is to treat the entire fluid-solid domain as a fluid and then to constrain the fluid within the solid domain to move in accordance with the solid governing equations. The immersed solid body can be rigid or deforming. Thus, in all these methods the fluid domain is extended into the solid domain. In this review, we provide a mathematical perspective on various immersed methods by recasting the governing equations in an extended-domain form for the fluid. The solid equations are used to impose appropriate constraints on the fluid that is extended into the solid domain. This leads to extended-domain constrained fluid-solid governing equations that provide a unified framework for various immersed-body techniques. The unified constrained governing equations in the strong form are independent of the temporal or spatial discretization schemes. We show that particular choices of time stepping and spatial discretization lead to different techniques reported in the literature, ranging from freely moving rigid bodies to elastic self-propelling bodies. These techniques have wide-ranging applications, including aquatic locomotion, underwater vehicles, car aerodynamics, organ physiology (e.g. cardiac flow, esophageal transport, respiratory flows), and wave energy converters, among others. We conclude with comments on outstanding challenges and future directions.
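The general shape of such a constrained extended-domain formulation can be written schematically as follows; this is a generic incompressible immersed-body form for illustration, not the exact equations of the review, with the constraint force \(\mathbf{f}_c\) and the solid velocity \(\mathbf{u}^{s}\) standing in for the quantities each specific method defines.

```latex
% Fluid equations posed on the extended domain \Omega = \Omega_f \cup \Omega_s(t),
% with a constraint force f_c that is active only inside the immersed solid \Omega_s(t).
\begin{align}
  \rho \left( \frac{\partial \mathbf{u}}{\partial t}
      + \mathbf{u} \cdot \nabla \mathbf{u} \right)
    &= -\nabla p + \mu \nabla^{2} \mathbf{u} + \mathbf{f}_c
    && \text{in } \Omega, \\
  \nabla \cdot \mathbf{u} &= 0
    && \text{in } \Omega, \\
  \mathbf{u} &= \mathbf{u}^{s}
    && \text{in } \Omega_s(t),
\end{align}
% where u^s is the velocity dictated by the solid governing equations (rigid-body
% motion or elastic deformation) and f_c is the Lagrange multiplier / body force
% enforcing the final constraint.
```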
Mendelian randomization uses genetic variants as instrumental variables to make causal inferences about the effects of modifiable risk factors on diseases from observational data. One of the major challenges in Mendelian randomization is that many genetic variants are only modestly or even weakly associated with the risk factor of interest, a setting known as many weak instruments. Many existing methods, such as the popular inverse-variance weighted (IVW) method, can be biased when the instruments are weak. To address this issue, the debiased IVW (dIVW) estimator, which is shown to be robust to many weak instruments, was recently proposed. However, this estimator still has non-ignorable bias when the effective sample size is small. In this paper, we propose a modified debiased IVW (mdIVW) estimator by multiplying the original dIVW estimator by a modification factor. After this simple correction, we show that the bias of the mdIVW estimator converges to zero at a faster rate than that of the dIVW estimator under some regularity conditions. Moreover, the mdIVW estimator has a smaller variance than the dIVW estimator. We further extend the proposed method to account for instrumental variable selection and balanced horizontal pleiotropy. We demonstrate the improvement of the mdIVW estimator over the dIVW estimator through extensive simulation studies and real data analysis.
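For context, the standard IVW and dIVW estimators in the two-sample summary-data setting can be written as below, where \(\hat\gamma_j\) and \(\hat\Gamma_j\) are the estimated variant-exposure and variant-outcome associations for variant \(j\), and \(\sigma_{Xj}\), \(\sigma_{Yj}\) are their standard errors; the exact form of the mdIVW modification factor is specific to the paper and is not reproduced here.

```latex
\begin{align}
  \hat{\beta}_{\mathrm{IVW}}
    &= \frac{\sum_{j=1}^{p} \hat{\gamma}_j \hat{\Gamma}_j / \sigma_{Yj}^{2}}
            {\sum_{j=1}^{p} \hat{\gamma}_j^{2} / \sigma_{Yj}^{2}},
  &
  \hat{\beta}_{\mathrm{dIVW}}
    &= \frac{\sum_{j=1}^{p} \hat{\gamma}_j \hat{\Gamma}_j / \sigma_{Yj}^{2}}
            {\sum_{j=1}^{p} \bigl( \hat{\gamma}_j^{2} - \sigma_{Xj}^{2} \bigr) / \sigma_{Yj}^{2}},
\end{align}
% the mdIVW estimator is then obtained by multiplying \hat{\beta}_{dIVW}
% by the paper's modification factor.
```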
Finite sample inference for Cox models is an important problem in many settings, such as clinical trials. Bayesian procedures provide a means for finite sample inference and for incorporating prior information, provided that the MCMC algorithms and posteriors are well behaved. At the same time, estimation procedures should retain good inferential properties in high-dimensional settings, and they should accommodate constraints and multilevel structures such as cure models and frailty models in a straightforward manner. To tackle these modeling challenges, we propose a uniformly ergodic Gibbs sampler for a broad class of convex-set-constrained multilevel Cox models. We develop two key strategies. First, we exploit a connection between Cox models and negative binomial processes through the Poisson process to reduce Bayesian computation to iterative Gaussian sampling. Second, we appeal to sufficient dimension reduction to address the difficult computation of nonparametric baseline hazards, allowing the Markov transition operator within the Gibbs sampler to be collapsed based on sufficient statistics. We demonstrate our approach using open-source data and simulations.
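As background, the Cox proportional hazards model underlying this work specifies the hazard for a subject with covariate vector \(x_i\) as shown below, with a nonparametric baseline hazard \(h_0\); a multiplicative frailty term is one common multilevel extension. The negative-binomial-process augmentation itself is specific to the paper and is not sketched here.

```latex
\begin{align}
  h(t \mid x_i) &= h_0(t) \exp\!\left( x_i^{\top} \beta \right), \\
  h(t \mid x_i, b_i) &= h_0(t)\, b_i \exp\!\left( x_i^{\top} \beta \right),
  \qquad b_i \sim \text{a frailty distribution (e.g. gamma)}.
\end{align}
```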
In this investigation, the distribution of the ratio of two independently distributed xgamma (Sen et al. 2016) random variables X and Y with different parameters is proposed and studied. Related distributional properties, such as moments and entropy measures, are investigated. We also establish a characterization of the proposed distribution based on truncated incomplete moments.
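For reference, the xgamma density of Sen et al. (2016) and the generic ratio-density integral are given below; the closed-form expression for the ratio distribution derived in the paper is not reproduced here.

```latex
% xgamma density (Sen et al. 2016) with parameter \theta > 0:
\begin{equation}
  f(x;\theta) = \frac{\theta^{2}}{1+\theta}
                \left( 1 + \frac{\theta}{2}\, x^{2} \right) e^{-\theta x},
  \qquad x > 0.
\end{equation}
% Density of the ratio Z = X/Y for independent X ~ xgamma(\theta_1), Y ~ xgamma(\theta_2):
\begin{equation}
  f_Z(z) = \int_{0}^{\infty} y \, f(zy;\theta_1)\, f(y;\theta_2)\, \mathrm{d}y,
  \qquad z > 0.
\end{equation}
```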
Knowledge graphs (KGs) of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge graphs are typically incomplete, it is useful to perform knowledge graph completion or link prediction, i.e. predict whether a relationship not in the knowledge graph is likely to be true. This paper serves as a comprehensive survey of embedding models of entities and relationships for knowledge graph completion, summarizing up-to-date experimental results on standard benchmark datasets and pointing out potential future research directions.
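As one concrete instance of the embedding models surveyed, the widely used TransE model scores a triple (h, r, t) by how closely the translated head embedding h + r matches the tail embedding t. The sketch below is illustrative, with random vectors standing in for trained embeddings.

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """TransE plausibility score: smaller ||h + r - t|| means a more plausible triple."""
    return np.linalg.norm(h + r - t, ord=norm)

rng = np.random.default_rng(0)
dim = 50
entities = {e: rng.normal(size=dim) for e in ["Paris", "France", "Berlin"]}
relations = {"capital_of": rng.normal(size=dim)}

# Link prediction: rank candidate tails for the query (Paris, capital_of, ?).
# With random vectors the ranking is arbitrary; trained embeddings would rank "France" first.
scores = {e: transe_score(entities["Paris"], relations["capital_of"], entities[e])
          for e in ["France", "Berlin"]}
print(min(scores, key=scores.get))
```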