
Purpose: Patient-specific computational fluid dynamics (CFD) studies of Coarctation of the Aorta (CoA) in resource-constrained settings are limited by the imaging modalities available for geometry and velocity data acquisition. Doppler echocardiography has been seen as a suitable velocity acquisition modality due to its wide availability and safety. This study aimed to investigate the application of classical machine learning (ML) methods to create an adequate and robust approach for obtaining boundary conditions (BCs) from Doppler echocardiography images, for haemodynamic modelling using CFD. Methods: Our proposed approach combines ML and CFD to model haemodynamic flow within the region of interest, the key feature being the use of ML models to calibrate the inlet and outlet BCs of the CFD model. The key input variable for the ML models was the patient's heart rate, as this was the parameter that varied in time across the measured vessels within the study. ANSYS Fluent was used for the CFD component of the study, and the scikit-learn Python library for the ML component. Results: We validated our approach against a real clinical case of severe CoA before intervention. The maximum coarctation velocity of our simulations was compared to the maximum coarctation velocity measured in the patient whose geometry was used within the study. Of the five ML models used to obtain BCs, the best model was within 5% of the measured maximum coarctation velocity. Conclusion: The framework demonstrated that it was capable of accounting for variations in the patient's heart rate between measurements. It thereby enabled the calculation of physiologically realistic BCs when the heart rate was scaled across each vessel, while providing a reasonably accurate solution.
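
As a minimal sketch of the calibration idea (with hypothetical values, not the study's data), a scikit-learn regressor can map the heart rate recorded at each Doppler measurement to the corresponding peak velocity, so that every vessel's BC can then be evaluated at one common heart rate:

```python
# A minimal sketch of the BC-calibration idea: regress per-vessel Doppler
# velocities on heart rate, then evaluate at a common heart rate to obtain
# a consistent set of CFD boundary conditions. All data here are
# hypothetical placeholders, not patient values.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
heart_rate = rng.uniform(70, 110, size=(40, 1))  # bpm at measurement time
peak_velocity = 1.2 + 0.01 * heart_rate[:, 0] + rng.normal(0, 0.05, 40)  # m/s

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(heart_rate, peak_velocity)

# Evaluate this vessel's BC at the heart rate shared with the other vessels.
common_hr = np.array([[85.0]])
print("calibrated peak velocity:", model.predict(common_hr)[0], "m/s")
```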

Related Content

This paper studies the problem of forecasting general stochastic processes using an extension of the Neural Jump ODE (NJ-ODE) framework. While NJ-ODE was the first framework to establish convergence guarantees for the prediction of irregularly observed time series, these results were limited to data stemming from Itô diffusions with complete observations, in particular Markov processes where all coordinates are observed simultaneously. In this work, we generalise these results to generic, possibly non-Markovian or discontinuous, stochastic processes with incomplete observations, by utilising the reconstruction properties of the signature transform. These theoretical results are supported by empirical studies, which show that the path-dependent NJ-ODE (PD-NJ-ODE) outperforms the original NJ-ODE framework on non-Markovian data. Moreover, we show that PD-NJ-ODE can be applied successfully to limit order book (LOB) data.
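
The signature transform underpinning the path-dependence is straightforward to compute for a piecewise-linear (i.e., discretely observed) path. Below is a minimal numpy sketch of the transform truncated at level two, purely illustrative and not the authors' implementation:

```python
# Truncated signature (levels 1-2) of a piecewise-linear path: the
# path-dependent feature used to summarise the observation history.
import numpy as np

def signature_level2(path):
    """path: (T, d) array of observations; returns (level1, level2)."""
    increments = np.diff(path, axis=0)                    # (T-1, d)
    level1 = increments.sum(axis=0)                       # X_T - X_0
    running = np.cumsum(increments, axis=0) - increments  # X_{t_{k-1}} - X_0
    # Per linear segment: (X_{k-1} - X_0) (x) dX_k + 1/2 dX_k (x) dX_k
    level2 = (running[:, :, None] * increments[:, None, :]).sum(axis=0) \
           + 0.5 * (increments[:, :, None] * increments[:, None, :]).sum(axis=0)
    return level1, level2

path = np.cumsum(np.random.default_rng(1).normal(size=(100, 2)), axis=0)
s1, s2 = signature_level2(path)
print(s1.shape, s2.shape)  # (2,) (2, 2)
```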

The use of synthetic (or simulated) data for training machine learning models has grown rapidly in recent years. Synthetic data can often be generated much faster and more cheaply than its real-world counterpart. One challenge of using synthetic imagery, however, is scene design: e.g., the choice of content and its features and spatial arrangement. To be effective, this design must not only be realistic, but appropriate for the target domain, which (by assumption) is unlabeled. In this work, we propose an approach to automatically choose the design of synthetic imagery based upon unlabeled real-world imagery. Our approach, termed Neural-Adjoint Meta-Simulation (NAMS), builds upon seminal recent meta-simulation approaches. In contrast to current state-of-the-art methods, our approach can be pre-trained once offline, and then provides fast design inference for new target imagery. Using both synthetic and real-world problems, we show that NAMS infers synthetic designs that match both in-domain and out-of-domain target imagery, and that training segmentation models with NAMS-designed imagery yields superior results compared to naïve randomized designs and state-of-the-art meta-simulation methods.
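
A hedged sketch of the neural-adjoint step at the core of this kind of approach: a surrogate network maps a scene-design vector to summary statistics of the rendered imagery, and the frozen surrogate is then inverted by gradient descent to find a design matching target-domain statistics. All names and dimensions below are illustrative assumptions:

```python
# Neural-adjoint design inference through a (here untrained) surrogate.
# In practice the surrogate would be pre-trained offline on (design, stats)
# pairs produced by the simulator.
import torch

surrogate = torch.nn.Sequential(
    torch.nn.Linear(8, 64), torch.nn.ReLU(), torch.nn.Linear(64, 16))

target_stats = torch.randn(16)           # stats of unlabeled target imagery
design = torch.zeros(8, requires_grad=True)
opt = torch.optim.Adam([design], lr=1e-2)
for _ in range(500):                     # adjoint: backprop through surrogate
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(surrogate(design), target_stats)
    loss.backward()
    opt.step()
print("inferred design:", design.detach())
```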

We introduce two synthetic likelihood methods for Simulation-Based Inference (SBI), to conduct either amortized or targeted inference from experimental observations when a high-fidelity simulator is available. Both methods learn a conditional energy-based model (EBM) of the likelihood using synthetic data generated by the simulator, conditioned on parameters drawn from a proposal distribution. The learned likelihood can then be combined with any prior to obtain a posterior estimate, from which samples can be drawn using MCMC. Our methods uniquely combine a flexible energy-based model with the minimization of a KL loss; this is in contrast to other synthetic likelihood methods, which either rely on normalizing flows or minimize score-based objectives, choices that come with known pitfalls. Our first method, Amortized Unnormalized Neural Likelihood Estimation (AUNLE), introduces a tilting trick during training that significantly lowers the computational cost of inference by enabling the use of efficient MCMC techniques. Our second method, Sequential UNLE (SUNLE), employs a robust doubly intractable approach in order to re-use simulation data and improve posterior accuracy on a specific dataset. We demonstrate the properties of both methods on a range of synthetic datasets, and apply them to a neuroscience model of the pyloric network in the crab Cancer borealis, matching the performance of other synthetic likelihood methods at a fraction of the simulation budget.
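
The inference step shared by both methods can be illustrated with a minimal sketch: given a learned conditional energy E(theta, x), the unnormalized posterior prior(theta) * exp(-E(theta, x_obs)) is sampled with random-walk Metropolis. The energy network below is untrained and stands in for the trained EBM:

```python
# Random-walk Metropolis on an EBM-based unnormalised posterior.
# The energy network is untrained and purely illustrative.
import numpy as np
import torch

energy = torch.nn.Sequential(torch.nn.Linear(3, 32), torch.nn.Tanh(),
                             torch.nn.Linear(32, 1))

def log_post(theta, x_obs):              # log std-normal prior minus energy
    return -0.5 * (theta ** 2).sum() - energy(torch.cat([theta, x_obs])).squeeze()

x_obs = torch.tensor([0.7])
theta = torch.zeros(2)
samples, rng = [], np.random.default_rng(0)
for _ in range(2000):
    prop = theta + 0.3 * torch.tensor(rng.normal(size=2), dtype=torch.float32)
    if np.log(rng.uniform()) < (log_post(prop, x_obs) - log_post(theta, x_obs)).item():
        theta = prop
    samples.append(theta.numpy())
print("posterior mean estimate:", np.mean(samples, axis=0))
```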

A sound field reproduction method called weighted pressure matching is proposed. Sound field reproduction aims to synthesize a desired sound field inside a target region using multiple loudspeakers. Optimization-based methods are derived from the minimization of errors between synthesized and desired sound fields, which enables the use of an arbitrary array geometry, in contrast with integral-equation-based methods. Pressure matching is widely used among the optimization-based sound field reproduction methods because of its simplicity of implementation. Its cost function is defined as the synthesis error at multiple control points inside the target region; the driving signals of the loudspeakers are then obtained by solving a least-squares problem. However, pressure matching does not take the region between the control points into consideration. We instead define the cost function as the regional integration of the synthesis error over the target region. On the basis of kernel interpolation of the sound field, this cost function can be represented as a weighted squared error of the synthesized pressures at the control points. Experimental results indicate that the proposed weighted pressure matching outperforms conventional pressure matching.
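
In closed form, plain pressure matching solves a regularized least-squares problem for the driving signals, while weighted pressure matching inserts a weight matrix derived from the kernel interpolation. Below is a minimal numpy sketch with random placeholders for the single-frequency transfer matrix, desired pressures, and weights:

```python
# Closed-form driving signals: pressure matching vs weighted pressure
# matching. G, p_des, and W are random placeholders for one frequency.
import numpy as np

rng = np.random.default_rng(0)
L, M = 16, 32                            # loudspeakers, control points
G = rng.normal(size=(M, L)) + 1j * rng.normal(size=(M, L))  # transfer matrix
p_des = rng.normal(size=M) + 1j * rng.normal(size=M)        # desired pressures
A = rng.normal(size=(M, M))
W = A @ A.T + M * np.eye(M)              # SPD weight (kernel-derived in the paper)
reg = 1e-2                               # Tikhonov regularization

d_pm = np.linalg.solve(G.conj().T @ G + reg * np.eye(L), G.conj().T @ p_des)
d_wpm = np.linalg.solve(G.conj().T @ W @ G + reg * np.eye(L),
                        G.conj().T @ W @ p_des)
print(np.linalg.norm(G @ d_pm - p_des), np.linalg.norm(G @ d_wpm - p_des))
```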

We introduce a new statistical test based on the observed spacings of ordered data. The statistic is sensitive to non-uniformity in random samples and to short-lived features in event time series. Under some conditions, this new test can outperform existing ones such as the well-known Kolmogorov-Smirnov or Anderson-Darling tests, in particular when the number of samples is small and differences occur over a small quantile of the null hypothesis distribution. A detailed description of the test statistic is provided, including a discussion of the parameterization of its distribution via asymptotic bootstrapping as well as a novel per-quantile error estimation of the empirical distribution. Two example applications are provided: using the test to boost the sensitivity in generic "bump hunting", and employing the test to detect supernovae. The article is rounded off with an extended performance comparison to other established goodness-of-fit tests.
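
A hedged illustration of how such a spacings-based test can be calibrated: transform the data to the unit interval via the null CDF, compute a statistic of the ordered spacings, and obtain its null distribution by Monte Carlo. The minimum spacing used below is a simple stand-in sensitive to local clustering, not the paper's exact statistic:

```python
# Monte Carlo calibration of a simple spacings statistic on [0, 1].
import numpy as np

def min_spacing_stat(u):
    s = np.sort(u)
    return np.diff(np.concatenate(([0.0], s, [1.0]))).min()

rng = np.random.default_rng(0)
n = 50
observed = rng.uniform(size=n)
observed[:5] = 0.42 + 1e-4 * rng.uniform(size=5)   # inject a short-lived cluster
t_obs = min_spacing_stat(observed)
null = np.array([min_spacing_stat(rng.uniform(size=n)) for _ in range(5000)])
print("p-value:", np.mean(null <= t_obs))          # tiny spacings -> small statistic
```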

The high dimensionality of hyperspectral images, which consist of many narrow spectral bands, often imposes a significant computational challenge for image processing. Spectral band selection is therefore an essential step for removing irrelevant, noisy, and redundant bands, and consequently for increasing classification accuracy. However, identifying useful bands from hundreds or even thousands of related bands is a nontrivial task. This paper aims at identifying a small set of highly discriminative bands to improve computational speed and prediction accuracy. We propose a new strategy based on joint mutual information to measure the statistical dependence and correlation between the selected bands and to evaluate the relative utility of each band for classification. The proposed filter approach is compared to reproduced filters based on mutual information. Simulation results on the hyperspectral image HSI AVIRIS 92AV3C using an SVM classifier show that the proposed algorithm outperforms the reproduced filter strategies. Keywords: hyperspectral images, classification, band selection, joint mutual information, dimensionality reduction, correlation, SVM.
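
A minimal sketch of a greedy selection loop under a joint-mutual-information criterion, with random placeholders for the discretized hyperspectral cube (not the AVIRIS data): each candidate band is scored by the joint MI that it and each already-selected band share with the class label:

```python
# Greedy JMI-style band selection on a discretized, synthetic cube.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
X = rng.integers(0, 8, size=(1000, 20))  # pixels x bands, 8 intensity levels
y = rng.integers(0, 4, size=1000)        # class labels

def jmi_score(f, s):                     # I((X_f, X_s); Y) via a paired code
    return mutual_info_score(X[:, f] * 8 + X[:, s], y)

selected = [int(np.argmax([mutual_info_score(X[:, b], y) for b in range(20)]))]
while len(selected) < 5:
    rest = [b for b in range(20) if b not in selected]
    scores = [sum(jmi_score(f, s) for s in selected) for f in rest]
    selected.append(rest[int(np.argmax(scores))])
print("selected bands:", selected)
```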

Empirical detection of long range dependence (LRD) of a time series often consists of deciding whether an estimate of the memory parameter $d$ corresponds to LRD. Surprisingly, the literature offers numerous spectral domain estimators for $d$ but there are only a few estimators in the time domain. Moreover, the latter estimators are criticized for relying on visual inspection to determine an observation window $[n_1, n_2]$ for a linear regression to run on. Theoretically motivated choices of $n_1$ and $n_2$ are often missing for many time series models. In this paper, we take the well-known variance plot estimator and provide rigorous asymptotic conditions on $[n_1, n_2]$ to ensure the estimator's consistency under LRD. We establish these conditions for a large class of square-integrable time series models. This large class enables one to use the variance plot estimator to detect LRD for infinite-variance time series (after suitable transformation). Thus, detection of LRD for infinite-variance time series is another novelty of our paper. A simulation study indicates that the variance plot estimator can detect LRD better than the popular spectral domain GPH estimator.
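
A minimal numpy sketch of the variance plot estimator under an explicit window choice: compute sample variances of non-overlapping block means for block sizes in $[n_1, n_2]$, regress log-variance on log-size, and read $d$ off the slope (the variance of the block mean decays like $m^{2d-1}$):

```python
# Variance plot estimator of the memory parameter d.
import numpy as np

def variance_plot_d(x, n1, n2):
    sizes = np.arange(n1, n2 + 1)
    log_var = []
    for m in sizes:
        k = len(x) // m
        means = x[:k * m].reshape(k, m).mean(axis=1)  # non-overlapping block means
        log_var.append(np.log(means.var()))
    slope = np.polyfit(np.log(sizes), log_var, 1)[0]
    return (slope + 1) / 2                            # slope ~ 2d - 1

x = np.random.default_rng(0).normal(size=20000)       # i.i.d. noise: expect d ~ 0
print("estimated d:", variance_plot_d(x, 10, 100))
```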

Urban rail transit provides significant comprehensive benefits such as large traffic volume and high speed, serving as one of the most important components of urban traffic construction management and congestion mitigation. Using real passenger flow data of an Asian subway system from April to June 2018, this work analyzes the spatio-temporal distribution of passenger flow and performs short-term passenger flow prediction. Stations are divided into four types for passenger flow forecasting, and meteorological records are collected for the same period. Machine learning methods with different inputs are then applied, and multivariate regression is performed to evaluate how much each weather element improves hourly passenger flow forecasting at representative metro stations. Our results show that adding weather variables enhances prediction precision on weekends, whereas performance on weekdays improves only marginally, and the contributions of the different weather elements differ. Moreover, different categories of stations are affected by weather differently. This study provides a possible method to further improve other prediction models, and attests to the promise of data-driven analytics for optimizing short-term scheduling in transit management.
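
A hedged sketch of the kind of comparison performed in the study: fit the same regressor with and without weather features and compare hourly-flow prediction error. The features and data below are synthetic placeholders, not the subway or meteorological records used in the paper:

```python
# Ablation of weather features for hourly passenger-flow prediction.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 2000
hour, dow = rng.integers(0, 24, n), rng.integers(0, 7, n)
rain, temp = rng.uniform(0, 10, n), rng.uniform(-5, 35, n)
flow = 500 + 80 * np.sin(hour / 24 * 2 * np.pi) - 15 * rain + rng.normal(0, 30, n)

for cols in ([hour, dow], [hour, dow, rain, temp]):   # without / with weather
    X = np.column_stack(cols)
    X_tr, X_te, y_tr, y_te = train_test_split(X, flow, random_state=0)
    m = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
    print("MAE:", mean_absolute_error(y_te, m.predict(X_te)))
```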

This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators such that the error is greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical and proposed estimators in estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under different simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.
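
To see why confounding matters here, consider a standard textbook correction (inverse-propensity weighting, used purely as an illustration and not necessarily the paper's estimator): a naive difference in mean repayment is biased when a confounder drives both the credit decision and repayment, and reweighting by estimated propensities reduces that bias. The data below are synthetic:

```python
# Naive vs inverse-propensity-weighted estimate of a credit decision's
# effect on repayment, with borrower risk as a confounder.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20000
risk = rng.normal(size=n)                               # confounder
treat = (rng.uniform(size=n) < 1 / (1 + np.exp(risk))).astype(int)  # low risk -> approval
repay = 1.0 * treat - 2.0 * risk + rng.normal(size=n)   # true effect = 1.0

naive = repay[treat == 1].mean() - repay[treat == 0].mean()
p = LogisticRegression().fit(risk[:, None], treat).predict_proba(risk[:, None])[:, 1]
ipw = np.mean(treat * repay / p) - np.mean((1 - treat) * repay / (1 - p))
print(f"true 1.0 | naive {naive:.2f} | IPW {ipw:.2f}")
```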

Medical image segmentation requires consensus ground truth segmentations to be derived from multiple expert annotations. A novel approach is proposed that obtains consensus segmentations from experts using graph cuts (GC) and semi-supervised learning (SSL). Popular approaches use iterative Expectation Maximization (EM) to estimate the final annotation and quantify each annotator's performance; such techniques pose the risk of getting trapped in local minima. We propose a self-consistency (SC) score to quantify annotator consistency using low-level image features. SSL is used to predict missing annotations by considering global features and local image consistency. The SC score also serves as the penalty cost in a second-order Markov random field (MRF) cost function that is optimized using graph cuts to derive the final consensus label. Graph cuts obtain a globally optimal solution without an iterative procedure. Experimental results on synthetic images, real data of Crohn's disease patients, and retinal images show our final segmentation to be accurate and more consistent than competing methods.
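
A simplified, dependency-free stand-in for the fusion step (ICM replaces graph cuts here purely to keep the sketch self-contained; data and scores are synthetic): expert votes weighted by self-consistency form the unary term of a binary MRF, with a pairwise term encouraging smooth labels:

```python
# Consensus labeling from SC-weighted expert votes via a binary MRF,
# optimized with ICM sweeps (a stand-in for graph cuts in this sketch).
import numpy as np

rng = np.random.default_rng(0)
H, W, E = 32, 32, 3
annotations = rng.integers(0, 2, size=(E, H, W))        # expert binary masks
sc = np.array([0.9, 0.6, 0.8])                          # per-expert SC scores

vote = np.tensordot(sc, annotations, axes=1) / sc.sum() # weighted soft vote
unary1, unary0 = 1 - vote, vote                         # costs of labels 1 and 0
labels, lam = (vote > 0.5).astype(int), 0.5

for _ in range(5):                                      # ICM sweeps
    for i in range(H):
        for j in range(W):
            nbrs = [labels[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                    if 0 <= a < H and 0 <= b < W]
            cost0 = unary0[i, j] + lam * sum(n != 0 for n in nbrs)
            cost1 = unary1[i, j] + lam * sum(n != 1 for n in nbrs)
            labels[i, j] = int(cost1 < cost0)
print("foreground pixels:", labels.sum())
```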
