Conventional harvesting problems for natural resources often assume physiological homogeneity of body length/weight among individuals. However, such assumptions are generally not valid in real-world problems, where heterogeneity plays an essential role in the planning of biological resource harvesting. Furthermore, heterogeneity is difficult to observe directly from the available data. This paper presents a novel optimal control framework for the cost-efficient harvesting of biological resources, with an application to fisheries management. The heterogeneity is incorporated into the resource dynamics, here the population dynamics, through a probability density that can be distorted from reality. The distortion, which represents the model uncertainty, is penalized through a divergence, leading to a non-standard dynamic differential game in which the Hamilton-Jacobi-Bellman-Isaacs (HJBI) equation has a unique nonlinear partial differential term. Existence and uniqueness results for the HJBI equation are presented, along with an explicit monotone finite difference method. Finally, the proposed optimal control is applied to a harvesting problem involving a recreationally, economically, and ecologically important fish species, using collected field data.
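To make the structure of such a formulation concrete, the following is a schematic divergence-penalized HJBI equation of the kind described above; it is an illustrative sketch only, and the symbols (value function V, drift b, running payoff f, nominal density p, distorted density p-tilde, divergence D, penalty weight theta) are generic placeholders rather than the paper's exact model.

```latex
% Schematic divergence-penalized HJBI equation (illustrative sketch, not the
% authors' model): the harvester chooses a control u while "nature" distorts
% the nominal density p to \tilde{p}, paying a divergence penalty D.
\[
  0 \;=\; \sup_{u \in U}\; \inf_{\tilde{p}} \Big\{
        \partial_t V \;+\; b\!\left(x, u, \tilde{p}\right)\, \partial_x V
        \;+\; f(x, u)
        \;+\; \frac{1}{\theta}\, D\!\left(\tilde{p} \,\middle\|\, p\right)
      \Big\}.
\]
```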
This article presents factor copula approaches for modelling the temporal dependence of non-Gaussian (continuous or discrete) longitudinal data. Factor copula models are canonical vine copulas that explain the underlying dependence structure of multivariate data through latent variables, and can therefore be easily interpreted and applied to unbalanced longitudinal data. We develop regression models for continuous, binary, and ordinal longitudinal data, including covariates, using factor copula constructions with subject-specific latent variables. Assuming homogeneous within-subject dependence, the proposed models allow feasible parametric inference in moderate- to high-dimensional situations via the two-stage inference functions for margins (IFM) estimation method. We assess the finite-sample performance of the proposed models with extensive simulation studies. In the empirical analysis, the proposed models are applied to analyse different longitudinal responses from two real-world data sets. Moreover, we compare their performance with that of some widely used random effect models using standard model selection techniques and find substantial improvements. Our studies suggest that factor copula models can be good alternatives to random effect models and can provide better insight into the temporal dependence of longitudinal data of arbitrary nature.
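As an illustration of the construction (a minimal sketch, not the authors' implementation), the snippet below simulates longitudinal responses from a one-factor Gaussian copula with a subject-specific latent variable; the factor loadings and marginal distributions are placeholder choices.

```python
# Minimal sketch (not the authors' code): simulating longitudinal responses
# from a one-factor Gaussian copula with a subject-specific latent variable.
import numpy as np
from scipy.stats import norm, poisson

rng = np.random.default_rng(0)
n_subjects, n_times = 500, 4
theta = np.array([0.6, 0.7, 0.5, 0.8])        # loading per time point (assumed)

V = rng.standard_normal((n_subjects, 1))       # subject-specific latent factor
eps = rng.standard_normal((n_subjects, n_times))
Z = theta * V + np.sqrt(1 - theta**2) * eps    # latent Gaussian scores
U = norm.cdf(Z)                                # copula-scale (uniform) data

# Continuous margin: exponential; discrete margin: Poisson (both placeholders)
y_cont = -np.log(1 - U)                        # Exp(1) quantile transform
y_count = poisson.ppf(U, mu=3.0)               # count/ordinal-type responses
```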
Predicting the long-term success of endovascular interventions in the clinical management of cerebral aneurysms requires detailed insight into the patient-specific physiological conditions. In this work, we not only propose numerical representations of endovascular medical devices such as coils, flow diverters, or the Woven EndoBridge, but also outline numerical models for the prediction of blood flow patterns in the aneurysm cavity immediately after a surgical intervention. Detailed knowledge of the post-surgical state then lays the basis for assessing the chances of a stable occlusion of the aneurysm, which is required for long-term treatment success. To this end, we propose mathematical and mechanical models of endovascular medical devices made of thin metal wires. These can then be used for fully resolved flow simulations of the post-surgical blood flow, performed in this work by means of a lattice Boltzmann method applied to the incompressible Navier-Stokes equations and patient-specific geometries. To probe the suitability of homogenized models, we also investigate poro-elastic models for representing such medical devices. In particular, we examine the validity of this modeling approach for flow diverter placement across the opening of the aneurysm cavity. For both approaches, physiologically meaningful boundary conditions are provided by reduced-order models of the vascular system. The present study demonstrates our capability to predict the post-surgical state and lays a solid foundation for tackling the prediction of thrombus formation and, thus, aneurysm occlusion in a next step.
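For readers unfamiliar with the flow solver, the snippet below is a minimal single-relaxation-time (BGK) D2Q9 lattice Boltzmann sketch on a periodic grid; the grid size and relaxation time are assumptions, and the sketch omits the device models, patient-specific geometry, and physiological boundary conditions discussed above.

```python
# Minimal D2Q9 BGK lattice Boltzmann sketch (illustrative only, not the
# solver used in the study): collision-and-streaming steps on a periodic grid.
import numpy as np

nx, ny, tau = 64, 64, 0.8                      # grid size and relaxation time (assumed)
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    cu = np.einsum('qd,xyd->xyq', c, u)        # c_i . u
    usq = np.einsum('xyd,xyd->xy', u, u)[..., None]
    return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

rho = np.ones((nx, ny))
u = np.zeros((nx, ny, 2))
f = equilibrium(rho, u)

for step in range(100):
    rho = f.sum(axis=-1)                                   # macroscopic density
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]    # macroscopic velocity
    f += -(f - equilibrium(rho, u)) / tau                  # BGK collision
    for q in range(9):                                     # streaming (periodic)
        f[:, :, q] = np.roll(f[:, :, q], shift=c[q], axis=(0, 1))
```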
Revealing hidden dynamics from stochastic data is a challenging problem, as randomness takes part in the evolution of the data. The problem becomes exceedingly complex when, as in many scenarios, trajectories of the stochastic data are unavailable. Here we present an approach to effectively modeling the dynamics of trajectory-free stochastic data based on the weak form of the Fokker-Planck (FP) equation, which governs the evolution of the density function of a Brownian process. Taking collocations of Gaussian functions as the test functions in the weak form of the FP equation, we transfer the derivatives onto the Gaussian functions and thus approximate the weak form by expectations estimated as sums over the data. With a dictionary representation of the unknown terms, a linear system is built and then solved by regression, revealing the unknown dynamics of the data. Hence, we name the method Weak Collocation Regression (WCR), after its three key components: the weak form, collocation of Gaussian kernels, and regression. The numerical experiments show that our method is flexible and fast, revealing the dynamics within seconds for multi-dimensional problems, and can be easily extended to high-dimensional data, such as 20 dimensions. WCR can also correctly identify the hidden dynamics of complex tasks with variable-dependent diffusion and coupled drift, and its performance is robust, achieving high accuracy in cases with added noise.
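A minimal one-dimensional sketch of this idea (not the authors' code) is given below: particle snapshots without trajectory pairing are used to recover drift dictionary coefficients and a constant diffusion by writing the weak Fokker-Planck equation against Gaussian test functions and solving the resulting linear system by least squares. The test-function centers, widths, and drift dictionary are illustrative choices.

```python
# 1-D weak-collocation-regression sketch: recover drift coefficients and a
# constant diffusion D = sigma^2/2 from particle snapshots (no trajectories).
import numpy as np

rng = np.random.default_rng(1)
dt, n_steps, n_particles = 0.01, 200, 20000
x = rng.standard_normal(n_particles) * 2.0
snapshots = [x.copy()]
for _ in range(n_steps):                       # ground truth: dX = -X dt + 0.5 dW
    x = x + (-x) * dt + 0.5 * np.sqrt(dt) * rng.standard_normal(n_particles)
    snapshots.append(x.copy())

centers, width = np.linspace(-3, 3, 20), 0.7   # Gaussian test-function collocation
psi   = lambda x, m: np.exp(-(x - m)**2 / (2 * width**2))
dpsi  = lambda x, m: -(x - m) / width**2 * psi(x, m)
d2psi = lambda x, m: ((x - m)**2 / width**2 - 1) / width**2 * psi(x, m)
dictionary = [lambda x: np.ones_like(x), lambda x: x, lambda x: x**2]  # drift basis

rows, rhs = [], []
for k in range(n_steps):
    x0, x1 = snapshots[k], snapshots[k + 1]
    for m in centers:
        # weak form: d/dt E[psi] = E[mu(X) psi'] + D E[psi'']
        rhs.append((psi(x1, m).mean() - psi(x0, m).mean()) / dt)
        rows.append([(f(x0) * dpsi(x0, m)).mean() for f in dictionary]
                    + [d2psi(x0, m).mean()])
coef, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print("drift coefficients:", coef[:-1], " diffusion D:", coef[-1])
```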
AI-enabled synthetic biology has tremendous potential, but it also significantly increases biorisks and brings about a new set of dual-use concerns. The picture is complicated given the vast innovations envisioned to emerge from combining emerging technologies, as AI-enabled synthetic biology potentially scales up bioengineering into industrial biomanufacturing. However, the literature review indicates that goals such as maintaining a reasonable scope for innovation or, more ambitiously, fostering a large bioeconomy do not necessarily conflict with biosafety; rather, the two need to go hand in hand. This paper presents a literature review of the issues and describes emerging frameworks for policy and practice that traverse the options of command-and-control, stewardship, bottom-up, and laissez-faire governance. Early warning systems that enable prevention and mitigation of future AI-enabled biohazards, whether from the lab, from deliberate misuse, or from the public realm, will constantly need to evolve, and adaptive, interactive approaches should emerge. Although biorisk is subject to an established governance regime, and scientists generally adhere to biosafety protocols, even experimental but legitimate use by scientists could lead to unexpected developments. Recent advances in chatbots enabled by generative AI have revived fears that advanced biological insight can more easily get into the hands of malignant individuals or organizations. Given these sets of issues, society needs to rethink how AI-enabled synthetic biology should be governed. The suggested way to visualize the challenge at hand is whack-a-mole governance, although the emerging solutions are perhaps not so different.
We propose GrainGNN, a surrogate model for the evolution of polycrystalline grain structure under the rapid solidification conditions of metal additive manufacturing. High-fidelity simulations of solidification microstructures are typically performed using multicomponent partial differential equations (PDEs) with moving interfaces. The inherent randomness of the PDE initial conditions (grain seeds) necessitates ensemble simulations to predict microstructure statistics, e.g., grain size, aspect ratio, and crystallographic orientation. Currently, such ensemble simulations are prohibitively expensive, and surrogates are necessary. In GrainGNN, we use a dynamic graph to represent interface motion and topological changes due to grain coarsening. We use a reduced representation of the microstructure based on hand-crafted features, and we combine graph algorithms for pattern finding and graph alteration with two neural networks: a classifier (for topological changes) and a regressor (for interface motion). Both networks have an encoder-decoder architecture; the encoder is a multi-layer transformer long short-term memory (LSTM) architecture, and the decoder is a single-layer perceptron. We evaluate GrainGNN by comparing it with high-fidelity phase-field simulations for in-distribution and out-of-distribution grain configurations for solidification under laser powder bed fusion conditions. GrainGNN achieves 80\%--90\% pointwise accuracy and nearly identical distributions of scalar quantities of interest (QoIs) between phase-field and GrainGNN simulations, as compared using the Kolmogorov-Smirnov test. GrainGNN's inference speedup (PyTorch on a single x86 CPU) over a high-fidelity phase-field simulation (CUDA on a single NVIDIA A100 GPU) is 150$\times$--2000$\times$ for a 100-initial-grain problem. Further, using GrainGNN, we model the formation of 11,600 grains in 220 seconds on a single CPU core.
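A schematic PyTorch sketch of the two-network design described above is shown below; the layer sizes, feature dimensions, node pooling, and exact encoder wiring are assumptions for illustration, not the GrainGNN implementation.

```python
# Schematic sketch of a transformer-LSTM encoder with two single-layer decoders
# (regressor for interface motion, classifier for topological events).
# All dimensions and the pooling scheme are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Transformer layer over graph-node features, then an LSTM over time."""
    def __init__(self, feat_dim=8, hidden=64):
        super().__init__()
        self.embed = nn.Linear(feat_dim, hidden)
        self.attn = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)

    def forward(self, x):                      # x: (batch, time, nodes, feat)
        b, t, n, f = x.shape
        h = self.attn(self.embed(x).reshape(b * t, n, -1)).reshape(b, t, n, -1)
        out, _ = self.lstm(h.mean(dim=2))      # pool over nodes, run LSTM over time
        return out[:, -1]                      # last hidden state per sample

encoder = Encoder()
regressor  = nn.Linear(64, 2)                  # interface-motion decoder (single layer)
classifier = nn.Linear(64, 1)                  # topological-event decoder (single layer)

x = torch.randn(4, 5, 16, 8)                   # toy batch: 4 samples, 5 steps, 16 nodes
z = encoder(x)
motion, event_logit = regressor(z), classifier(z)
```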
Data generation remains a bottleneck in training surrogate models to predict molecular properties. We demonstrate that multitask Gaussian process regression overcomes this limitation by leveraging both expensive and cheap data sources. In particular, we consider training sets constructed from coupled-cluster (CC) and density functional theory (DFT) data. We report that multitask surrogates can predict at CC-level accuracy while reducing the data generation cost by over an order of magnitude. Of note, our approach allows the training set to include DFT data generated by a heterogeneous mix of exchange-correlation functionals without imposing any artificial hierarchy on functional accuracy. More generally, the multitask framework can accommodate a wider range of training set structures -- including full disparity between the different levels of fidelity -- than existing kernel approaches based on $\Delta$-learning, though we show that the accuracy of the two approaches can be similar. Consequently, multitask regression can be a tool for reducing data generation costs even further by opportunistically exploiting existing data sources.
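The following is a minimal two-fidelity multitask GP sketch in plain NumPy, illustrating how an inter-task covariance couples cheap and expensive data so that the expensive task can be predicted from mostly cheap observations; the toy function, inter-task covariance matrix, and kernel hyperparameters are assumptions rather than the surrogates or datasets used in the paper.

```python
# Minimal multitask GP regression sketch: task 0 = cheap (DFT-like),
# task 1 = costly (CC-like). Kernel: K((x,t),(x',t')) = B[t,t'] * rbf(x,x').
import numpy as np

def rbf(a, b, ell=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

rng = np.random.default_rng(2)
x_cheap = np.linspace(0, 1, 25)
x_cc    = np.array([0.1, 0.5, 0.9])                   # only a few expensive points
f = lambda x: np.sin(6 * x)
y_cheap = f(x_cheap) + 0.3 * np.cos(9 * x_cheap)      # systematically biased cheap data
y_cc    = f(x_cc)

X = np.concatenate([x_cheap, x_cc])
t = np.concatenate([np.zeros(len(x_cheap), int), np.ones(len(x_cc), int)])
y = np.concatenate([y_cheap, y_cc])

B = np.array([[1.0, 0.8], [0.8, 1.0]])                # inter-task covariance (assumed)
K = B[t][:, t] * rbf(X, X) + 1e-6 * np.eye(len(X))

x_star = np.linspace(0, 1, 101)                       # predict the expensive task
K_star = B[np.ones(len(x_star), int)][:, t] * rbf(x_star, X)
mean_cc = K_star @ np.linalg.solve(K, y)              # posterior mean for the CC task
```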
In statistical inference, retrodiction is the act of inferring potential causes in the past based on knowledge of the effects in the present and the dynamics leading to the present. Retrodiction is applicable even when the dynamics are not reversible, and it agrees with the reverse dynamics when they exist, so retrodiction may be viewed as an extension of inversion, i.e., time-reversal. Recently, an axiomatic definition of retrodiction has been given that applies to both classical and quantum probability, using ideas from category theory. Almost simultaneously, a framework for information flow in terms of semicartesian categories has been proposed in the setting of categorical probability theory. Here, we formulate a general definition of retrodiction to add to the information flow axioms in semicartesian categories, thus providing an abstract framework for retrodiction beyond classical and quantum probability theory. More precisely, we extend Bayesian inference, and more generally Jeffrey's probability kinematics, to arbitrary semicartesian categories.
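For orientation, in the classical finite setting the retrodiction of a channel with respect to a prior is given by Bayes' rule; the formula below is this standard special case, not the categorical formulation developed in the paper.

```latex
% Classical Bayesian inverse of a channel f : X -> Y with prior p on X:
% \hat{f}_p assigns to an observed effect y a distribution over causes x.
\[
  \hat{f}_p(x \mid y) \;=\; \frac{f(y \mid x)\, p(x)}{\sum_{x' \in X} f(y \mid x')\, p(x')}.
\]
% When f is deterministic and invertible, \hat{f}_p reduces to f^{-1},
% so retrodiction extends time-reversal.
```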
The integration of technological solutions aims to improve accuracy, precision, and repeatability in farming operations, and biosensor devices are increasingly used for understanding basic biology during livestock production. The aim of this study was to design and validate a miniaturized tri-axial accelerometer with re-programmable schedule protocols for the non-invasive monitoring of farmed fish. The device was attached to the operculum of gilthead sea bream and European sea bass juveniles to monitor their physical activity through measurements of movement accelerations in the x- and y-axes, while records of operculum beats served as a measurement of respiratory frequency. Data post-processing of exercised fish in swimming test chambers revealed an exponential increase in fish accelerations as swimming speed increased from 1 to 4 body lengths per second, while a close relationship between oxygen consumption and opercular frequency was consistently found. The usefulness of a low computational load for data pre-processing with on-board algorithms was verified from low to submaximal exercise, and this procedure increased the autonomy of the system to up to 6 h of data recording with different programmable schedules. Visual observations regarding tissue damage, feeding behavior, and circulating levels of stress markers did not reveal a short-term negative impact of device tagging. Reduced plasma levels of triglycerides revealed a transient inhibition of feed intake in small fish, but this disturbance was not detected in larger fish. Taken together, these results provide proof of concept that miniaturized devices are suitable for the non-invasive and reliable metabolic phenotyping of farmed fish to improve their overall performance and welfare. Further work is underway to improve the attachment procedure and the full device packaging.
The optimisation of harvesting processes for commonly cultivated crops is of great importance for agricultural industrialisation. The use of machine vision has enabled the automated identification of crops and enhanced harvesting efficiency, but challenges still exist. This study presents a new framework that combines two separate convolutional neural network (CNN) architectures to simultaneously accomplish the tasks of crop detection and harvesting (robotic manipulation) inside a simulated environment. Crop images in the simulated environment are subjected to random rotations, cropping, and brightness and contrast adjustments to create augmented images for dataset generation. The You Only Look Once (YOLO) framework is employed with traditional rectangular bounding boxes for crop localization. The proposed method subsequently processes the acquired image data with a Visual Geometry Group (VGG) model to determine the grasping positions for robotic manipulation.
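A minimal sketch of the kind of augmentation pipeline described above is given below, using torchvision; the specific parameter values and image size are assumptions, not the paper's settings.

```python
# Illustrative augmentation pipeline for generating training images
# (random rotation, crop, brightness/contrast jitter); values are assumed.
import torchvision.transforms as T

augment = T.Compose([
    T.RandomRotation(degrees=15),                        # random rotations
    T.RandomResizedCrop(size=416, scale=(0.7, 1.0)),     # random cropping
    T.ColorJitter(brightness=0.3, contrast=0.3),         # brightness/contrast jitter
    T.ToTensor(),
])
# augmented = augment(pil_image)   # apply to each simulated crop image
```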
Recent advances in 3D fully convolutional networks (FCNs) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafted features or class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that first uses a 3D FCN to roughly define a candidate region, which is then used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ~10% and allows it to focus on a more detailed segmentation of the organs and vessels. We utilize training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection acquired at a different hospital, comprising 150 CT scans and targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve significantly higher performance on small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN-based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download: //github.com/holgerroth/3Dunet_abdomen_cascade.
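The cascade logic can be sketched as follows; this is an illustrative sketch in which `coarse_fcn` and `fine_fcn` stand in for the two trained 3D FCNs, and the fixed voxel margin around the candidate region is an assumed value.

```python
# Schematic coarse-to-fine cascade: stage 1 finds a candidate region,
# stage 2 segments only the cropped sub-volume around it.
import numpy as np

def cascade_segment(volume, coarse_fcn, fine_fcn, margin=10):
    """Run the two-stage segmentation on a 3D CT volume (z, y, x)."""
    coarse_mask = coarse_fcn(volume) > 0.5                  # candidate-region prediction
    idx = np.argwhere(coarse_mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
    crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]    # ~10% of the voxels
    fine = np.zeros(volume.shape, dtype=np.int32)
    fine[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine_fcn(crop)
    return fine                                             # full-resolution label map
```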