
The goal of radiation therapy for cancer is to deliver the prescribed radiation dose to the tumor while minimizing the dose to the surrounding healthy tissues. To evaluate treatment plans, the dose distribution to healthy organs is commonly summarized as dose-volume histograms (DVHs). Normal tissue complication probability (NTCP) modelling has centered on making patient-level risk predictions from features extracted from the DVHs, but few studies have considered adopting a causal framework to evaluate the safety of alternative treatment plans. We propose causal estimands for NTCP based on deterministic and stochastic interventions, as well as estimators based on marginal structural models that impose bivariable monotonicity between dose, volume, and toxicity risk. The properties of these estimators are studied through simulations, and their use is illustrated in the context of radiotherapy treatment of anal canal cancer patients.
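As a concrete reference for the kind of input these models consume, the sketch below computes a cumulative DVH, the fraction of an organ's volume receiving at least each dose level, from simulated voxel doses. This is a minimal illustration of the standard DVH definition, not the proposed causal estimators; all numbers are made up.

```python
import numpy as np

def cumulative_dvh(voxel_doses, dose_grid):
    """Fraction of organ volume receiving at least each dose level."""
    voxel_doses = np.asarray(voxel_doses)
    # For each threshold d, V(d) = P(dose >= d) over the organ's voxels.
    return np.array([(voxel_doses >= d).mean() for d in dose_grid])

# Toy example: 10,000 voxels of a healthy organ with simulated doses (Gy).
rng = np.random.default_rng(0)
doses = rng.gamma(shape=2.0, scale=10.0, size=10_000)
grid = np.linspace(0, 80, 81)
dvh = cumulative_dvh(doses, grid)
print(dvh[:5])  # V(0), V(1), ... as volume fractions
```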

Related content

The exponential growth in scientific publications poses a severe challenge for human researchers. It forces attention into ever narrower sub-fields, making it difficult to discover new impactful research ideas and collaborations outside one's own field. While there are ways to predict a scientific paper's future citation counts, they require the research to be finished and the paper written, usually assessing impact long after the idea was conceived. Here we show how to predict the impact of nascent ideas that researchers have never published. For that, we developed a large evolving knowledge graph built from more than 21 million scientific papers. It combines a semantic network created from the content of the papers and an impact network created from the historic citations of papers. Using machine learning, we can predict the dynamics of the evolving network into the future with high accuracy, and thereby the impact of new research directions. We envision that the ability to predict the impact of new ideas will be a crucial component of future artificial muses that can inspire new impactful and interesting scientific ideas.
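As a toy illustration of the semantic-network half of such a knowledge graph, the sketch below builds a concept co-occurrence graph from a hypothetical corpus and scores an unstudied concept pair with a common-neighbors heuristic. The real pipeline mines concepts from over 21 million papers and trains a machine-learning model on many network features, including citation-based impact signals; everything here is a stand-in.

```python
import itertools
import networkx as nx

# Toy corpus: each paper is a set of extracted concepts (assumed given;
# the real pipeline mines these from millions of abstracts).
papers = [
    {"quantum entanglement", "error correction", "surface codes"},
    {"error correction", "machine learning"},
    {"quantum entanglement", "machine learning", "neural networks"},
]

G = nx.Graph()
for concepts in papers:
    for u, v in itertools.combinations(sorted(concepts), 2):
        w = G.get_edge_data(u, v, {"weight": 0})["weight"]
        G.add_edge(u, v, weight=w + 1)  # co-occurrence count

# Score a not-yet-studied concept pair by shared neighbors, one simple
# hand-crafted feature standing in for the paper's learned predictor.
pair = ("surface codes", "neural networks")
score = len(list(nx.common_neighbors(G, *pair)))
print(pair, "common-neighbor score:", score)
```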

With the growing prevalence of diabetes and the associated public health burden, it is crucial to identify modifiable factors that could improve patients' glycemic control. In this work, we examine associations between medication usage, concurrent comorbidities, and glycemic control, utilizing data from continuous glucose monitors (CGMs). CGMs provide interstitial glucose measurements, but clinical studies commonly reduce these data to simple statistical summaries, resulting in substantial information loss. Recent advancements in the Fréchet regression framework make it possible to utilize more of this information by treating the full distributional representation of CGM data as the response, while sparsity regularization enables variable selection. However, the methodology does not scale to large datasets; crucially, variable selection inference using subsampling methods is computationally infeasible. We develop a new algorithm for sparse distributional regression by deriving an explicit characterization of the gradient and Hessian of the underlying objective function, and by utilizing rotations on the sphere to perform feasible updates. The updated method is up to 10000-fold faster than the original approach, opening the door to applying sparse distributional regression to large-scale datasets and enabling previously unattainable subsampling-based inference. Applying our method to CGM data from patients with type 2 diabetes and obstructive sleep apnea, we found a significant association between sulfonylurea medication and glucose variability, without evidence of an association with mean glucose. We also found that overnight oxygen desaturation variability showed a stronger association with glucose regulation than overall oxygen desaturation levels.
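For intuition about the distributional representation: one standard choice in Fréchet regression for distributions is the 2-Wasserstein geometry, where each subject's CGM stream is encoded by its quantile function and the distance between two subjects reduces to an L2 distance between quantile functions. The sketch below illustrates that encoding on simulated data; it is not the paper's sparse regression algorithm.

```python
import numpy as np

def quantile_representation(glucose_readings, n_grid=100):
    """Represent a subject's CGM readings by an empirical quantile function."""
    probs = (np.arange(n_grid) + 0.5) / n_grid
    return np.quantile(glucose_readings, probs)

def wasserstein2(q1, q2):
    """2-Wasserstein distance between two distributions on the real line,
    computed as the L2 distance between their quantile functions."""
    return np.sqrt(np.mean((q1 - q2) ** 2))

rng = np.random.default_rng(1)
subj_a = rng.normal(120, 25, size=2000)   # mg/dL, simulated CGM stream
subj_b = rng.normal(140, 40, size=2000)   # higher mean and variability
qa, qb = quantile_representation(subj_a), quantile_representation(subj_b)
print(f"W2 distance: {wasserstein2(qa, qb):.1f} mg/dL")
```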

Radon is a carcinogenic, radioactive gas that can accumulate indoors. Accurate knowledge of indoor radon concentration is therefore crucial for assessing radon-related health effects and for identifying radon-prone areas. Indoor radon concentration at the national scale is usually estimated on the basis of extensive measurement campaigns. However, the characteristics of the sample often differ from those of the population because of the large number of relevant factors that control indoor radon concentration, such as the availability of geogenic radon or the floor level. Furthermore, the sample size usually does not allow estimation at high spatial resolution. We propose a model-based approach that allows a more realistic estimation of the indoor radon distribution, at a higher spatial resolution, than a purely data-based approach. A two-stage modelling approach was applied: (1) a quantile regression forest using environmental and building data as predictors estimates the probability distribution function of indoor radon for each floor level of each residential building in Germany; (2) a probabilistic Monte Carlo sampling technique combines and population-weights the floor-level predictions. In this way, the uncertainty of the individual predictions is effectively propagated into the estimate of variability at the aggregated level. The results show an approximately lognormal distribution with an arithmetic mean of 63 Bq/m³, a geometric mean of 41 Bq/m³, and a 95th percentile of 180 Bq/m³. The exceedance probabilities for 100 Bq/m³ and 300 Bq/m³ are 12.5 % (10.5 million people) and 2.2 % (1.9 million people), respectively.
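A minimal sketch of the stage-2 aggregation idea: given hypothetical stage-1 quantile predictions for a few floors and occupant counts as population weights, draw person-level radon values by inverse-CDF interpolation and summarize the aggregate. All quantiles and weights below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
probs = np.array([0.05, 0.25, 0.50, 0.75, 0.95])

# Hypothetical stage-1 output: predicted radon quantiles (Bq/m3) for the
# occupied floors of three buildings, with occupant counts as weights.
floor_quantiles = np.array([
    [15, 30, 48, 75, 160],
    [20, 45, 70, 110, 260],
    [10, 22, 35, 55, 120],
])
occupants = np.array([4, 2, 6])

# Stage 2: draw one radon value per simulated person by inverse-CDF
# interpolation of that floor's predicted quantile function.
n_draws = 20_000
floors = rng.choice(len(occupants), size=n_draws, p=occupants / occupants.sum())
u = rng.uniform(0.05, 0.95, size=n_draws)  # stay inside the known quantile range
samples = np.array([np.interp(ui, probs, floor_quantiles[f])
                    for ui, f in zip(u, floors)])

print("arithmetic mean:", samples.mean().round(1))
print("P(exceed 100 Bq/m3):", (samples > 100).mean().round(3))
```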

This paper presents a method for assessing the thematic agreement of geospatial data products of different semantics and spatial granularities, which may be affected by spatial offsets between test and reference data. The proposed method uses a multi-scale framework that allows a probabilistic evaluation of whether thematic disagreement between datasets is induced by spatial offsets arising from the differing nature of the datasets. We test our method using real-estate-derived settlement locations and remote-sensing-derived building footprint data.
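One simple way to probe whether disagreement is offset-induced, in the spirit of the multi-scale framework, is to re-evaluate agreement after aggregating both datasets to coarser blocks: offset-induced disagreement fades once the analysis scale exceeds the offset, while genuine thematic disagreement persists. The sketch below illustrates this on synthetic binary rasters; it is not the paper's probabilistic formulation.

```python
import numpy as np

def block_aggregate(raster, k):
    """Aggregate a binary raster to k-by-k blocks (presence if any cell is set)."""
    h, w = raster.shape
    r = raster[: h - h % k, : w - w % k]
    return r.reshape(h // k, k, w // k, k).max(axis=(1, 3))

def agreement(a, b):
    """Cell-wise agreement rate between two equally shaped binary rasters."""
    return (a == b).mean()

rng = np.random.default_rng(7)
reference = rng.random((120, 120)) < 0.2
test = np.roll(reference, shift=(2, 1), axis=(0, 1))  # simulated spatial offset

# Agreement recovering as the scale coarsens suggests offset-induced,
# rather than truly thematic, disagreement.
for k in (1, 2, 4, 8):
    a = agreement(block_aggregate(reference, k), block_aggregate(test, k))
    print(f"scale {k}: agreement {a:.3f}")
```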

Many phenomena in real-world social networks can be interpreted as the spread of influence between activated and non-activated network elements. These phenomena are modeled by combinatorial graphs, where vertices represent the elements and edges represent social ties between them. A central problem is to identify subsets of elements (target sets, or dynamic monopolies) whose activation spreads to the entire network. In edge-weighted networks, the influence between two adjacent vertices depends on the weight of their edge. In models with incentives, the main problem is to minimize the total amount of incentives (called optimal target vectors) offered to vertices so that some vertices become activated and their activation spreads to the whole network. The algorithmic study of target sets and vectors is an active research area. We prove an inapproximability result for optimal target sets in edge-weighted networks, even for complete graphs. Further hardness and polynomial-time results are presented for optimal target vectors and degenerate threshold assignments in edge-weighted networks.
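The sketch below simulates the underlying spread process in a weighted threshold model, assumed here in its standard form: a vertex activates once the total edge weight from its active neighbors reaches its threshold, and a target set is one whose activation reaches everyone. The toy network and thresholds are invented.

```python
def spreads_to_all(adj, weights, thresholds, seeds):
    """Simulate weighted-threshold influence spread: a vertex activates once
    the total edge weight from its active neighbors reaches its threshold.
    Returns True if the seed set activates the whole network."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v in active:
                continue
            influence = sum(weights[(min(u, v), max(u, v))]
                            for u in adj[v] if u in active)
            if influence >= thresholds[v]:
                active.add(v)
                changed = True
    return len(active) == len(adj)

# Toy edge-weighted network: a triangle plus a pendant vertex.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
weights = {(0, 1): 2, (0, 2): 1, (1, 2): 1, (2, 3): 1}
thresholds = {0: 2, 1: 3, 2: 2, 3: 1}
print(spreads_to_all(adj, weights, thresholds, seeds={0}))     # False
print(spreads_to_all(adj, weights, thresholds, seeds={0, 1}))  # True
```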

Exceptionally elegant formulae exist for the fractional Laplacian operator applied to weighted classical orthogonal polynomials. We utilize these results to construct a solver, based on frame properties, for equations involving the fractional Laplacian of any power $s \in (0,1)$ on an unbounded domain in one or two dimensions. The numerical method represents solutions in an expansion of weighted classical orthogonal polynomials, as well as their unweighted counterparts, with a specific extension to $\mathbb{R}^d$, $d \in \{1,2\}$. We examine the frame properties of this family of functions for the solution expansion and, under standard frame conditions, derive an a priori estimate for the stationary equation. Moreover, we prove that the expected order of convergence is achieved for an implicit Euler discretization in time of the fractional heat equation. We apply our solver to numerous examples, including the fractional heat equation (utilizing up to a $6^\text{th}$-order Runge--Kutta time discretization), a fractional heat equation with a time-dependent exponent $s(t)$, and a two-dimensional problem, observing spectral convergence in the spatial dimension for sufficiently smooth data.
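For intuition about the operator itself: $(-\Delta)^s$ acts in Fourier space as the multiplier $|\xi|^{2s}$. The sketch below applies this spectral definition on a periodic grid via the FFT, a truncated-domain illustration of the operator only, not the authors' frame-based polynomial solver.

```python
import numpy as np

def fractional_laplacian_periodic(u, L, s):
    """Apply (-Delta)^s on a periodic grid via its Fourier symbol |xi|^(2s)."""
    n = u.size
    xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular wavenumbers
    return np.fft.ifft(np.abs(xi) ** (2 * s) * np.fft.fft(u)).real

# Sanity check on u(x) = sin(x): (-Delta)^s sin = |1|^(2s) sin = sin for any s.
L = 2 * np.pi
x = np.linspace(0, L, 256, endpoint=False)
u = np.sin(x)
err = np.max(np.abs(fractional_laplacian_periodic(u, L, s=0.5) - np.sin(x)))
print(f"max error: {err:.2e}")
```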

The fractional material derivative appears as the fractional operator governing the dynamics of the scaling limits of Lévy walks, stochastic processes that originate from the famous continuous-time random walks. It is usually defined as a Fourier-Laplace multiplier and can therefore be thought of as a pseudo-differential operator. In this paper, we show that there exists a local representation of the fractional material derivative in time and space, defined pointwise. This allows us to define it on a space of locally integrable functions that is larger than the original one, in which the Fourier and Laplace transforms exist as functions. We consider several typical differential equations involving the fractional material derivative and provide conditions for their solutions to exist; in some cases, the analytical solution can be found. For the general initial value problem, we devise a finite volume method and prove its stability, convergence, and conservation of probability. Numerical illustrations verify our analytical findings. Moreover, our numerical experiments show that the proposed numerical scheme is substantially faster than a Monte Carlo method for computing the probability density function.
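The Monte Carlo baseline mentioned at the end can be sketched directly: simulate many Lévy walk trajectories (constant unit speed, power-law flight durations) and histogram the positions at a fixed time to estimate the PDF. The construction below is the standard one; the exponent, horizon, and sample sizes are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_walk_positions(n_paths, T, alpha):
    """Monte Carlo positions at time T of 1D Levy walks with unit speed and
    Pareto(alpha) flight durations (a standard construction)."""
    pos = np.zeros(n_paths)
    for i in range(n_paths):
        t, x = 0.0, 0.0
        while t < T:
            tau = rng.pareto(alpha) + 1.0   # flight duration, tau >= 1
            v = rng.choice((-1.0, 1.0))     # random direction, unit speed
            dt = min(tau, T - t)            # truncate the last flight at T
            x += v * dt
            t += dt
        pos[i] = x
    return pos

samples = levy_walk_positions(n_paths=20_000, T=50.0, alpha=0.7)
hist, edges = np.histogram(samples, bins=60, range=(-50, 50), density=True)
print("estimated PDF mass near the ballistic fronts:", hist[0] + hist[-1])
```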

We consider a class of linear Vlasov partial differential equations driven by Wiener noise. Several types of stochastic perturbations are treated: additive noise, multiplicative Itô and Stratonovich noise, and transport noise. We propose splitting integrators for the temporal discretization of these stochastic partial differential equations. These integrators are designed to preserve qualitative properties of the exact solutions, depending on the stochastic perturbation, such as preservation of norms or positivity of the solutions. We provide numerical experiments to illustrate the properties of the proposed integrators and to investigate mean-square rates of convergence.
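A minimal sketch of the splitting idea on a toy problem, assuming a 1D stochastic transport equation du = -c u_x dt + sigma u ∘ dW with periodic boundary: Lie-Trotter splitting solves the transport sub-flow and the Stratonovich sub-flow each exactly, so positivity survives by construction. The paper treats general Vlasov equations and several noise types; this conveys only the flavor.

```python
import numpy as np

rng = np.random.default_rng(3)

# Lie-Trotter splitting for du = -c u_x dt + sigma u o dW (Stratonovich,
# periodic in x). Each sub-flow is solved exactly on its own.
n, c, sigma = 200, 1.0, 0.3
dx = 1.0 / n
dt = dx / c                  # one grid cell per step -> transport is an exact shift
u = 1.0 + 0.5 * np.sin(2 * np.pi * np.linspace(0, 1, n, endpoint=False))

for _ in range(200):
    u = np.roll(u, 1)                   # exact transport sub-step (norm-preserving)
    dW = rng.normal(scale=np.sqrt(dt))
    u = u * np.exp(sigma * dW)          # exact Stratonovich sub-step (positive factor)

# Both sub-flows map positive states to positive states.
print("min(u) > 0:", u.min() > 0)
```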

Although metaheuristics are widely recognized as efficient techniques for solving real-world optimization problems, implementing them from scratch remains difficult for domain experts without programming skills. In this scenario, metaheuristic optimization frameworks are a practical alternative, as they provide a variety of algorithms composed of customizable elements, as well as experimental support. Recently, many engineering problems have required optimizing multiple or even many objectives, increasing interest in appropriate metaheuristic algorithms and frameworks that can integrate new specific requirements while maintaining the generality and reusability principles for which they were conceived. With this in mind, this paper introduces JCLEC-MO, a Java framework for both multi- and many-objective optimization that enables engineers to apply, or adapt, a large number of multi-objective algorithms with little coding effort. A case study shows how JCLEC-MO can be used to address many-objective engineering problems, which often require the inclusion of domain-specific elements, and to analyze experimental outcomes through conveniently connected R utilities.
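Independently of any particular framework's API (the sketch below is generic Python, not JCLEC-MO, which is a Java library), the common primitive these multi- and many-objective algorithms share is Pareto dominance, shown here as a non-dominated filter over a toy population.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(front):
    """Filter a list of objective vectors down to its Pareto front."""
    return [a for a in front if not any(dominates(b, a) for b in front)]

# Toy 3-objective population (e.g., cost, weight, failure rate).
population = [(1.0, 5.0, 0.2), (2.0, 3.0, 0.1), (1.5, 4.0, 0.3), (2.5, 5.5, 0.4)]
print(non_dominated(population))  # the last vector is dominated by the second
```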

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. This lack of theory means that XAI must be validated empirically, on a case-by-case basis, which prevents systematic theory-building in the field. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows precise prediction of explainee inference conditioned on the explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
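The similarity-space ingredient can be made concrete: Shepard's law says generalization decays exponentially with distance in psychological space. The sketch below reads a predicted agreement off that decay for two hypothetical explanation embeddings; the embedding vectors and the sensitivity parameter are stand-ins, not the study's fitted model.

```python
import numpy as np

def shepard_similarity(x, y, sensitivity=1.0):
    """Shepard's universal law: generalization decays exponentially with
    distance in a psychological similarity space."""
    return np.exp(-sensitivity * np.linalg.norm(x - y))

# Hypothetical: embed the explainee's own saliency explanation and the AI's
# shown saliency map in a shared feature space, then read predicted agreement
# off the similarity (one ingredient, not the full pre-registered model).
own_explanation = np.array([0.8, 0.1, 0.3])
ai_explanation = np.array([0.7, 0.2, 0.4])
print(f"predicted agreement: {shepard_similarity(own_explanation, ai_explanation):.2f}")
```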
