
This article presents MuMFiM, an open source application for multiscale modeling of fibrous materials on massively parallel computers. MuMFiM uses two scales to represent fibrous materials such as biological network materials (extracellular matrix, connective tissue, etc.). It is designed to make use of multiple levels of parallelism, including distributed parallelism of the macro and microscales as well as GPU-accelerated data parallelism of the microscale. Scaling results of the GPU-accelerated microscale show that solving microscale problems concurrently on the GPU can lead to a 1000x speedup over the solution of a single RVE on the GPU. In addition, we show nearly optimal strong and weak scaling results of MuMFiM on up to 128 nodes of AiMOS (Rensselaer Polytechnic Institute), which is composed of IBM AC922 nodes with six NVIDIA Volta V100 GPUs and two 20-core IBM Power9 CPUs each. We also show how MuMFiM can be used to solve problems of interest to the broader engineering community, in particular providing an example of the facet capsule ligament (FCL) of the human spine undergoing uniaxial extension.
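The large speedup quoted above comes from batching many independent RVE solves so the accelerator stays saturated, instead of solving one RVE at a time. The sketch below is a toy NumPy illustration of that batched, data-parallel pattern, not MuMFiM's actual GPU microscale solver; system sizes and counts are arbitrary placeholders, and on a GPU the same stacked call (e.g. via CuPy) would expose the concurrency across RVEs.

```python
import numpy as np

rng = np.random.default_rng(0)

n_rve, n_dof = 4096, 24          # hypothetical: many small RVE-sized systems
# Stack of well-conditioned, SPD stiffness-like matrices and load vectors.
K = rng.standard_normal((n_rve, n_dof, n_dof))
K = K @ np.transpose(K, (0, 2, 1)) + n_dof * np.eye(n_dof)
f = rng.standard_normal((n_rve, n_dof, 1))

# One-at-a-time solution (what a single-RVE workflow would do).
u_serial = np.stack([np.linalg.solve(K[i], f[i]) for i in range(n_rve)])

# Batched solution: all RVE systems solved in one stacked call.  On a GPU
# (e.g. cupy.linalg.solve) this maps the independent solves to concurrent
# work, which is the source of the speedups reported for many RVEs.
u_batched = np.linalg.solve(K, f)

assert np.allclose(u_serial, u_batched)
```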

Related content

Challenges to reproducibility and replicability have gained widespread attention, driven by large replication projects with lukewarm success rates. A nascent body of work has emerged developing algorithms to estimate the replicability of published findings. The current study explores ways in which AI-enabled signals of confidence in research might be integrated into the literature search. We interview 17 PhD researchers about their current processes for literature search and ask them to provide feedback on a replicability estimation tool. Our findings suggest that participants tend to confuse replicability with generalizability and related concepts. Information about replicability can support researchers throughout the research design process. However, the use of AI estimation is debatable due to the lack of explainability and transparency. The ethical implications of AI-enabled confidence assessment must be further studied before such tools can be widely accepted. We discuss implications for the design of technological tools to support scholarly activities and advance replicability.

The cosinor model is frequently used to represent gene expression given the 24-hour day-night cycle time at which a corresponding tissue sample is collected. However, the timing of many biological processes is based on individual-specific internal timing systems that are offset relative to day-night cycle time. When these offsets are unknown, they pose a challenge in performing statistical analyses with a cosinor model. In particular, when sample collection times are mis-recorded, cosinor regression can yield attenuated parameter estimates, which would also attenuate test statistics. This attenuation bias would inflate type II error rates in identifying genes with oscillatory behavior. This paper proposes a heuristic method to account for unknown offsets when tissue samples are collected in a longitudinal design. Specifically, this method involves first estimating individual-specific cosinor models for each gene. The times of sample collection for that individual are then translated based on the estimated phase shifts across all genes. Simulation studies confirm that this method mitigates bias in estimation and inference. Illustrations with real data from three circadian biology studies highlight that this method produces parameter estimates and inferences akin to those obtained when each individual's offset is known.
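For context, the standard cosinor model writes expression at clock time t as Y(t) = M + A cos(2π(t − φ)/24) + ε, which becomes linear in (β₁, β₂) = (A cos(2πφ/24), A sin(2πφ/24)) after expanding the cosine. The sketch below is a simplified illustration of the general idea rather than the authors' estimator: it fits per-gene cosinor models for one individual by least squares, summarizes the per-gene phase estimates into a single offset via a circular mean against hypothetical reference phases, and shifts that individual's recorded collection times accordingly.

```python
import numpy as np

OMEGA = 2.0 * np.pi / 24.0   # 24-hour period

def fit_cosinor(t, y):
    """Least-squares cosinor fit; returns (mesor, amplitude, acrophase in hours)."""
    X = np.column_stack([np.ones_like(t), np.cos(OMEGA * t), np.sin(OMEGA * t)])
    mesor, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = np.hypot(b1, b2)
    acrophase = np.arctan2(b2, b1) / OMEGA          # peak time, in hours
    return mesor, amplitude, acrophase

def estimate_offset(times, expr, ref_phases):
    """Circular mean of per-gene phase shifts relative to reference phases.

    times:      (n_samples,) recorded day-night clock times for one individual
    expr:       (n_genes, n_samples) expression matrix for that individual
    ref_phases: (n_genes,) hypothetical reference acrophases (e.g. group-level)
    """
    shifts = []
    for g in range(expr.shape[0]):
        _, _, phase = fit_cosinor(times, expr[g])
        shifts.append(phase - ref_phases[g])
    ang = OMEGA * np.asarray(shifts)                 # hours -> radians
    return np.arctan2(np.sin(ang).mean(), np.cos(ang).mean()) / OMEGA

# Translating the recorded times by the estimated offset before refitting
# is what mitigates the attenuation described above:
# corrected_times = (times - offset_hours) % 24.0
```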

With the recent success of generative models in image and text, the evaluation of generative models has gained a lot of attention. Whereas most generative models are compared in terms of scalar values such as Fréchet Inception Distance (FID) or Inception Score (IS), Sajjadi et al. (2018) proposed a definition of a precision-recall curve to characterize the closeness of two distributions. Since then, various approaches to precision and recall have appeared (Kynkaanniemi et al., 2019; Naeem et al., 2020; Park & Kim, 2023). They center their attention on the extreme values of precision and recall, but beyond that, their ties to one another are elusive. In this paper, we unify most of these approaches under the same umbrella, relying on the work of Simon et al. (2019). Doing so, we are able not only to recover entire curves, but also to expose the sources of the reported pitfalls of the concerned metrics. We also provide consistency results that go well beyond the ones presented in the corresponding literature. Finally, we study the different behaviors of the curves obtained experimentally.
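As a concrete reference point for one of the approaches cited above, the sketch below implements the k-NN manifold estimate of precision and recall in the style of Kynkaanniemi et al. (2019) in plain NumPy: a generated sample counts toward precision if it falls inside the k-th nearest-neighbor ball of at least one real sample, and recall is the symmetric quantity. Feature dimension, sample counts, and k are placeholders, and this is only one of the metrics the paper unifies.

```python
import numpy as np

def knn_radii(x, k):
    """Distance from each row of x to its k-th nearest neighbor within x."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-distance
    return np.sort(d, axis=1)[:, k - 1]

def coverage(queries, support, radii):
    """Fraction of queries lying inside some support point's k-NN ball."""
    d = np.linalg.norm(queries[:, None, :] - support[None, :, :], axis=-1)
    return np.mean((d <= radii[None, :]).any(axis=1))

def precision_recall(real, fake, k=3):
    precision = coverage(fake, real, knn_radii(real, k))  # fake inside real manifold
    recall = coverage(real, fake, knn_radii(fake, k))     # real inside fake manifold
    return precision, recall

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 16))
fake = rng.normal(0.5, 1.0, size=(500, 16))    # shifted toy "generator"
print(precision_recall(real, fake))
```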

We present MULTIGAIN 2.0, a major extension to the controller synthesis tool MULTIGAIN, built on top of the probabilistic model checker PRISM. This new version extends MULTIGAIN's multi-objective capabilities by allowing for the formal verification and synthesis of controllers for probabilistic systems with multi-dimensional long-run average reward structures, steady-state constraints, and linear temporal logic properties. Additionally, MULTIGAIN 2.0 can modify the underlying linear program to prevent unbounded-memory and other unintuitive solutions, and can visualize Pareto curves in the two- and three-dimensional cases to facilitate trade-off analysis in multi-objective scenarios.
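For readers unfamiliar with the underlying machinery: in the single-objective, unichain case, a long-run average reward objective of the kind mentioned above reduces to a textbook linear program over state-action occupation frequencies, which is the building block that multi-dimensional rewards and steady-state constraints extend. The sketch below is not MULTIGAIN's implementation; it merely solves that standard LP for a tiny hypothetical MDP with scipy.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny hypothetical MDP: 2 states, 2 actions.
# P[s, a, s'] = transition probability, r[s, a] = reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
r = np.array([[1.0, 0.0],
              [0.0, 2.0]])

n_s, n_a = r.shape
n_x = n_s * n_a                      # variables: occupation measure x[s, a]

# Flow conservation: sum_a x[s', a] - sum_{s, a} P[s, a, s'] x[s, a] = 0.
A_eq = np.zeros((n_s + 1, n_x))
for sp in range(n_s):
    for s in range(n_s):
        for a in range(n_a):
            j = s * n_a + a
            A_eq[sp, j] -= P[s, a, sp]
            if s == sp:
                A_eq[sp, j] += 1.0
A_eq[n_s, :] = 1.0                   # occupation measure sums to one
b_eq = np.append(np.zeros(n_s), 1.0)

# Maximize the expected long-run average reward (linprog minimizes, hence -r).
res = linprog(c=-r.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("optimal gain:", -res.fun)
print("occupation measure:", res.x.reshape(n_s, n_a))
```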

We present a novel approach for modeling bounded count time series data, by deriving accurate upper and lower bounds for the variance of a bounded count random variable while maintaining a fixed mean. Leveraging these bounds, we propose semiparametric mean and variance joint (MVJ) models utilizing a clipped-Laplace link function. These models offer a flexible and feasible structure for both mean and variance, accommodating various scenarios of under-dispersion, equi-dispersion, or over-dispersion in bounded time series. The proposed MVJ models feature a linear mean structure with positive regression coefficients summing to one and allow for negative regression coefficients and autocorrelations. We demonstrate that the autocorrelation structure of MVJ models mirrors that of an autoregressive moving-average (ARMA) process, provided the proposed clipped-Laplace link functions with nonnegative regression coefficients summing to one are utilized. We establish conditions ensuring the stationarity and ergodicity properties of the MVJ process, along with demonstrating the consistency and asymptotic normality of the conditional least squares estimators. To aid model selection and diagnostics, we introduce two model selection criteria and apply two model diagnostic statistics. Finally, we conduct simulations and real data analyses to investigate the finite-sample properties of the proposed MVJ models, providing insights into their efficacy and applicability in practical scenarios.

The structural properties of mechanical metamaterials are typically studied with two-scale methods based on computational homogenization. Because such materials have a complex microstructure, enriched schemes such as second-order computational homogenization are required to fully capture their non-linear behavior, which arises from non-local interactions due to the buckling or patterning of the microstructure. In the two-scale formulation, the effective behavior of the microstructure is captured with a representative volume element (RVE), and a homogenized effective continuum is considered on the macroscale. Although an effective continuum formulation is introduced, solving such two-scale models concurrently is still computationally demanding due to the many repeated solutions for each RVE at the microscale level. In this work, we propose a reduced-order model for the microscopic problem arising in second-order computational homogenization, using proper orthogonal decomposition and a novel hyperreduction method that is specifically tailored for this problem and inspired by the empirical cubature method. Two numerical examples are considered, in which the performance of the reduced-order model is carefully assessed by comparing its solutions with direct numerical simulations (entirely resolving the underlying microstructure) and the full second-order computational homogenization model. The reduced-order model is able to approximate the result of the full computational homogenization well, provided that the training data is representative of the problem at hand. Any remaining errors, when compared with the direct numerical simulation, can be attributed to the inherent approximation errors in the computational homogenization scheme. In terms of single-threaded run times, speed-ups on the order of 100 are achieved with the reduced-order model compared to direct numerical simulations.
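To make the reduction step above concrete, the sketch below shows the generic POD/Galerkin ingredients such a reduced-order model builds on (the paper's tailored hyperreduction scheme is specific to the work and not reproduced here): a reduced basis is extracted from snapshot solutions via an SVD, and a microscopic linear system is then projected onto that basis. The snapshot data and system are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n_dof, n_snapshots, n_modes = 400, 60, 8

# Synthetic snapshot matrix (columns = microscopic solutions from training runs),
# constructed to have a low-dimensional structure plus small noise.
snapshots = rng.standard_normal((n_dof, 4)) @ rng.standard_normal((4, n_snapshots))
snapshots += 0.01 * rng.standard_normal((n_dof, n_snapshots))

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :n_modes]                         # reduced basis, n_dof x n_modes

# A stand-in SPD microscopic system K u = f.
K = rng.standard_normal((n_dof, n_dof))
K = K @ K.T + n_dof * np.eye(n_dof)
f = rng.standard_normal(n_dof)

# Galerkin projection: solve an n_modes x n_modes system instead of n_dof x n_dof.
u_full = np.linalg.solve(K, f)
u_reduced = V @ np.linalg.solve(V.T @ K @ V, V.T @ f)
print("relative error vs. full solve:",
      np.linalg.norm(u_reduced - u_full) / np.linalg.norm(u_full))
```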

We develop an inferential toolkit for analyzing object-valued responses, which correspond to data situated in general metric spaces, paired with Euclidean predictors within the conformal framework. To this end, we introduce conditional profile average transport costs, where distance profiles, the one-dimensional distributions of probability mass falling into balls of increasing radius around a given object, are compared through the optimal transport cost of moving one distance profile to another. The average cost of transporting a given distance profile to all others is crucial for statistical inference in metric spaces and underpins the proposed conditional profile scores. A key feature of the proposed approach is to utilize the distribution of conditional profile average transport costs as a conformity score for general metric space-valued responses, which facilitates the construction of prediction sets by the split conformal algorithm. We derive the uniform convergence rate of the proposed conformity score estimators and establish asymptotic conditional validity for the prediction sets. The finite-sample performance on synthetic data in various metric spaces demonstrates that the proposed conditional profile score outperforms existing methods in terms of both coverage level and size of the resulting prediction sets, even in the special case of scalar and thus Euclidean responses. We also demonstrate the practical utility of conditional profile scores for network data from New York taxi trips and for compositional data reflecting energy sourcing of U.S. states.
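To unpack the main ingredients: the distance profile of an object is the one-dimensional distribution of its distances to the other objects, two profiles are compared via the 1-Wasserstein (optimal transport) cost, and the resulting average transport cost plays the role of a conformity score in a split conformal procedure. The sketch below computes these quantities for a toy Euclidean sample using the sorted-sample formula for 1D transport cost; it illustrates the ingredients under simplifying assumptions, not the authors' full conditional estimator.

```python
import numpy as np

def distance_profiles(points):
    """Each row: sorted distances from one object to all the others."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, 1:]              # drop the zero self-distance

def avg_transport_cost(profiles):
    """Mean 1-Wasserstein cost from each profile to every other profile.

    For equal-size sorted samples, the 1D optimal transport cost equals the
    mean absolute difference of the order statistics.
    """
    n = profiles.shape[0]
    costs = np.abs(profiles[:, None, :] - profiles[None, :, :]).mean(axis=-1)
    return costs.sum(axis=1) / (n - 1)            # diagonal contributes zero

rng = np.random.default_rng(2)
y = rng.standard_normal((200, 3))                 # toy sample (Euclidean stand-in)
scores = avg_transport_cost(distance_profiles(y))

# Split-conformal flavor: calibrate a score threshold at level 1 - alpha;
# a candidate response enters the prediction set if its score falls below it.
alpha = 0.1
print("score threshold:", np.quantile(scores, 1 - alpha))
```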

Purpose: Radiologists are tasked with visually scrutinizing large amounts of data produced by 3D volumetric imaging modalities. Small signals can go unnoticed during the 3D search because they are hard to detect in the visual periphery. Recent advances in machine learning and computer vision have led to effective computer-aided detection (CADe) support systems with the potential to mitigate perceptual errors. Approach: Sixteen non-expert observers searched through digital breast tomosynthesis (DBT) phantoms and single cross-sectional slices of the DBT phantoms. The 3D/2D searches occurred with and without a convolutional neural network (CNN)-based CADe support system. The model provided observers with bounding boxes superimposed on the image stimuli while they looked for a small microcalcification signal and a large mass signal. Eye gaze positions were recorded and correlated with changes in the area under the ROC curve (AUC). Results: The CNN-CADe improved the 3D search for the small microcalcification signal (delta AUC = 0.098, p = 0.0002) and the 2D search for the large mass signal (delta AUC = 0.076, p = 0.002). The CNN-CADe benefit in 3D for the small signal was markedly greater than in 2D (delta delta AUC = 0.066, p = 0.035). Analysis of individual differences suggests that those who explored the least with eye movements benefited the most from the CNN-CADe (r = -0.528, p = 0.036). However, for the large signal, the 2D benefit was not significantly greater than the 3D benefit (delta delta AUC = 0.033, p = 0.133). Conclusion: The CNN-CADe brings unique performance benefits to the 3D (vs. 2D) search of small signals by reducing errors caused by the under-exploration of the volumetric data.

This work introduces a novel framework for dynamic factor model-based group-level analysis of time series data from multiple subjects, called GRoup Integrative DYnamic factor (GRIDY) models. The framework identifies and characterizes inter-subject similarities and differences between two pre-determined groups by considering a combination of group spatial information and individual temporal dynamics. Furthermore, it enables the identification of intra-subject similarities and differences over time by employing different model configurations for each subject. Methodologically, the framework combines a novel principal angle-based rank selection algorithm and a non-iterative integrative analysis framework. Inspired by simultaneous component analysis, this approach also reconstructs identifiable latent factor series with flexible covariance structures. The performance of the GRIDY models is evaluated through simulations conducted under various scenarios. An application is also presented to compare resting-state functional MRI data collected from multiple subjects in autism spectrum disorder and control groups.
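The rank selection step mentioned above relies on principal angles between subspaces, which quantify how closely the column spaces of two loading (or factor) matrices align. The sketch below computes them in the standard way, via the SVD of the product of orthonormal bases (scipy.linalg.subspace_angles performs the same computation); it illustrates the geometric quantity such a selection algorithm builds on, not the GRIDY algorithm itself, and the data are synthetic.

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians) between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A)                 # orthonormal basis for span(A)
    Qb, _ = np.linalg.qr(B)                 # orthonormal basis for span(B)
    sigma = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(sigma, -1.0, 1.0))

rng = np.random.default_rng(3)
shared = rng.standard_normal((50, 2))       # a shared two-dimensional component
A = np.column_stack([shared, rng.standard_normal((50, 1))])
B = np.column_stack([shared, rng.standard_normal((50, 1))])
print(np.degrees(principal_angles(A, B)))   # two near-zero angles, one large
```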

Recent advances in 3D fully convolutional networks (FCN) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from the large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafting features or training class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that will first use a 3D FCN to roughly define a candidate region, which will then be used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ~10% and allows it to focus on more detailed segmentation of the organs and vessels. We utilize training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection acquired at a different hospital that includes 150 CT scans, targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve a significantly higher performance in small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download: https://github.com/holgerroth/3Dunet_abdomen_cascade.
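For reference, the Dice scores reported above measure volumetric overlap between a predicted and a ground-truth mask, and the coarse-to-fine cascade works by cropping the second-stage input to the candidate region found by the first stage. The sketch below shows that metric and the cropping idea on synthetic arrays; shapes and the padding margin are placeholders, and the actual networks are not reproduced.

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum() + eps)

def crop_to_candidate(volume, coarse_mask, margin=8):
    """Crop the CT volume to the first-stage candidate region (plus a margin)."""
    idx = np.argwhere(coarse_mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

# Toy example: a synthetic volume with a coarse (first-stage) and true mask.
vol = np.zeros((64, 64, 64), dtype=np.float32)
truth = np.zeros_like(vol, dtype=bool); truth[20:30, 20:30, 20:30] = True
coarse = np.zeros_like(truth);          coarse[18:32, 18:32, 18:32] = True
print("coarse-stage Dice:", dice(coarse, truth))
print("second-stage input shape:", crop_to_candidate(vol, coarse).shape)
```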
