
Unscheduled treatment interruptions may lead to reduced quality of care in radiation therapy (RT). Identifying the effects of the RT prescription dose on the treatment-interruption outcome, mediated through the doses delivered to different organs-at-risk (OARs), can inform future treatment planning. The radiation exposure to OARs can be summarized by a matrix of dose-volume histograms (DVH) for each patient. Although various methods for high-dimensional mediation analysis have been proposed recently, few studies have investigated how matrix-valued data can be treated as mediators. In this paper, we propose a novel Bayesian joint mediation model for high-dimensional matrix-valued mediators. In this joint model, latent features are extracted from the matrix-valued data through an adaptation of probabilistic multilinear principal components analysis (MPCA), retaining the inherent matrix structure. We derive and implement a Gibbs sampling algorithm to jointly estimate all model parameters, and introduce a Varimax rotation method to identify active indicators of mediation among the matrix-valued data. Our simulation study finds that the proposed joint model is more efficient in estimating causal decomposition effects than an alternative two-step method, and demonstrates that the mediation effects can be identified and visualized in matrix form. We apply the method to study the effect of prescription dose on treatment interruptions in anal canal cancer patients.
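
To make the rotation step concrete, the following is a minimal sketch of a generic Varimax rotation of a loading matrix (Kaiser's criterion) in plain NumPy. It is not the authors' implementation; in the proposed model the rotation is applied to loadings extracted by probabilistic MPCA, and the function name and arguments here are illustrative only.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Classical Varimax rotation of a (p x k) loading matrix.

    Generic textbook algorithm, shown only to illustrate the rotation step;
    not the paper's implementation, which rotates loadings obtained from
    probabilistic MPCA to reveal active mediation indicators.
    """
    p, k = loadings.shape
    R = np.eye(k)
    crit_old = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # SVD-based update of the Varimax criterion (Kaiser, 1958)
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag(np.sum(L ** 2, axis=0)))
        )
        R = u @ vt
        crit_new = np.sum(s)
        if crit_new - crit_old < tol:
            break
        crit_old = crit_new
    return loadings @ R
```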

Related Content

This paper proposes a novel slacks-based interval DEA approach that computes interval targets, slacks, and crisp inefficiency scores. It uses interval arithmetic and requires solving a mixed-integer linear program. The corresponding super-efficiency formulation for discriminating among the efficient units is also presented. We also provide a case study applying the approach to sustainable tourism in the Mediterranean, assessing the sustainable tourism efficiency of twelve Mediterranean regions to validate the proposed method. The inputs and outputs cover the three sustainability dimensions and include GHG emissions as an undesirable output. Three regions were found inefficient, and the corresponding input and output improvements were computed. A complete ranking of the regions was also obtained using the super-efficiency model.
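
For readers unfamiliar with slacks-based DEA, the sketch below solves the standard additive (slacks-based) model for crisp data as a single linear program; it does not reproduce the interval extension or the mixed-integer formulation of the paper, and all names are placeholders.

```python
import numpy as np
from scipy.optimize import linprog

def additive_dea(X, Y, o):
    """Additive (slacks-based) DEA model for DMU `o` under constant returns to scale.

    X is (m, n) inputs and Y is (s, n) outputs, one column per DMU.  This crisp LP
    is only a baseline illustration; the paper's approach handles interval data
    and requires a mixed-integer linear program.
    """
    m, n = X.shape
    s_dim = Y.shape[0]
    # Decision vector: [lambda (n), input slacks (m), output slacks (s_dim)]
    c = np.concatenate([np.zeros(n), -np.ones(m), -np.ones(s_dim)])  # maximize total slack
    A_eq = np.block([
        [X, np.eye(m), np.zeros((m, s_dim))],       # X @ lam + s_minus = x_o
        [Y, np.zeros((s_dim, m)), -np.eye(s_dim)],  # Y @ lam - s_plus  = y_o
    ])
    b_eq = np.concatenate([X[:, o], Y[:, o]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    lam, s_minus, s_plus = res.x[:n], res.x[n:n + m], res.x[n + m:]
    return s_minus, s_plus  # all-zero slacks mean DMU `o` is efficient
```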

We study regression adjustment with general function class approximations for estimating the average treatment effect in the design-based setting. Standard regression adjustment incurs bias due to sample re-use, and this bias leads to behavior that is sub-optimal in the sample size and/or imposes restrictive assumptions. Our main contribution is to introduce a novel decorrelation-based approach that circumvents these issues. We prove guarantees, both asymptotic and non-asymptotic, relative to the oracle functions that are targeted by a given regression adjustment procedure. We illustrate our method by applying it to various high-dimensional and non-parametric problems, exhibiting improved sample complexity and weakened assumptions relative to known approaches.
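
The decorrelation construction itself is not spelled out in the abstract. As a point of reference only, the sketch below shows an ordinary regression-adjusted ATE estimator with sample splitting (cross-fitting), one standard way to avoid re-using the same samples for fitting and evaluation; it is a stand-in for intuition, not the proposed method.

```python
import numpy as np
from sklearn.linear_model import Ridge

def cross_fitted_ate(X, y, t, n_folds=2, seed=0):
    """Regression-adjusted ATE with sample splitting (cross-fitting).

    Outcome models for treated and control units are fit on held-out folds, so
    fitted values are never evaluated on the data used to fit them.
    Illustrative only; the paper proposes a different, decorrelation-based fix.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    folds = rng.integers(0, n_folds, size=n)
    mu1, mu0 = np.empty(n), np.empty(n)
    for k in range(n_folds):
        train, test = folds != k, folds == k
        m1 = Ridge().fit(X[train & (t == 1)], y[train & (t == 1)])
        m0 = Ridge().fit(X[train & (t == 0)], y[train & (t == 0)])
        mu1[test], mu0[test] = m1.predict(X[test]), m0.predict(X[test])
    # Augmented difference in means using the cross-fitted adjustments
    return (np.mean(mu1 - mu0)
            + np.mean(y[t == 1] - mu1[t == 1])
            - np.mean(y[t == 0] - mu0[t == 0]))
```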

The key ingredient for retrieving a signal from its Fourier magnitudes, namely, for solving the phase retrieval problem, is an effective prior on the sought signal. In this paper, we study the phase retrieval problem under the prior that the signal lies in a semi-algebraic set. This is a very general prior, as semi-algebraic sets include linear models, sparse models, and ReLU neural network generative models. The latter is the main motivation of this paper, due to the remarkable success of deep generative models in a variety of imaging tasks, including phase retrieval. We prove that almost all signals in R^N can be determined from their Fourier magnitudes, up to a sign, if they lie in a (generic) semi-algebraic set of dimension N/2. The same is true for all signals if the semi-algebraic set is of dimension N/4. We also generalize these results to the problem of signal recovery from the second moment in multi-reference alignment models with multiplicity-free representations of compact groups. This general result is then used to derive improved sample complexity bounds for recovering band-limited functions on the sphere from their noisy copies, each acted upon by a random element of SO(3).
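
A minimal numerical illustration of the measurement model and of the "up to a sign" ambiguity mentioned above (it does not touch the semi-algebraic machinery): a real signal and its negation produce identical Fourier magnitudes.

```python
import numpy as np

# A real signal and its negation have the same Fourier magnitudes, so recovery
# from |FFT(x)| can at best be up to a global sign, as stated in the abstract.
rng = np.random.default_rng(0)
x = rng.standard_normal(16)
assert np.allclose(np.abs(np.fft.fft(x)), np.abs(np.fft.fft(-x)))
```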

We consider the problem of target detection with a constant false alarm rate (CFAR). This constraint is crucial in many practical applications and is a standard requirement in classical composite hypothesis testing. In settings where classical approaches are computationally expensive or where only data samples are given, machine learning methodologies are advantageous, but CFAR is less well understood in these settings. To close this gap, we introduce a framework of CFAR-constrained detectors. Theoretically, we prove that a CFAR-constrained Bayes optimal detector is asymptotically equivalent to the classical generalized likelihood ratio test (GLRT). Practically, we develop a deep learning framework for fitting neural networks that approximate it. Experiments on target detection in different settings demonstrate that the proposed CFARnet allows a flexible tradeoff between CFAR and accuracy.
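
As a small aside on what the CFAR constraint means operationally, the sketch below calibrates a detection threshold to a target false alarm rate from noise-only scores; this is a post-hoc calibration step shown for intuition, not the CFARnet training procedure, and the names are placeholders.

```python
import numpy as np

def cfar_threshold(scores_h0, alpha):
    """Threshold whose empirical false alarm rate on noise-only (H0) scores is `alpha`.

    Post-hoc calibration shown for intuition only; CFARnet instead enforces the
    constant-false-alarm-rate behavior during neural-network training.
    """
    return np.quantile(scores_h0, 1.0 - alpha)

# usage: declare a detection whenever score > cfar_threshold(h0_scores, alpha=1e-3)
```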

Despite decades of practice, finite-size errors in many widely used electronic structure theories for periodic systems remain poorly understood. For periodic systems using a general Monkhorst-Pack grid, there has been no comprehensive and rigorous analysis of the finite-size error in the Hartree-Fock theory (HF) and the second order M{\o}ller-Plesset perturbation theory (MP2), which are the simplest wavefunction-based method and the simplest post-Hartree-Fock method, respectively. Such calculations can be viewed as multi-dimensional integrals discretized with certain trapezoidal rules. Due to the Coulomb singularity, the integrand has many points of discontinuity in general, and standard error analysis based on the Euler-Maclaurin formula gives overly pessimistic results. The lack of analytic understanding of finite-size errors also impedes the development of effective finite-size correction schemes. We propose a unified analysis to obtain sharp convergence rates of finite-size errors for the periodic HF and MP2 theories. Our main technical advancement is a generalization of the result of [Lyness, 1976] for obtaining sharp convergence rates of the trapezoidal rule for a class of non-smooth integrands. Our result is applicable to three-dimensional bulk systems as well as low dimensional systems (such as nanowires and 2D materials). Our unified analysis also allows us to prove the effectiveness of the Madelung-constant correction to the Fock exchange energy, and the effectiveness of a recently proposed staggered mesh method for periodic MP2 calculations [Xing, Li, Lin, J. Chem. Theory Comput. 2021]. Our analysis connects the effectiveness of the staggered mesh method with integrands with removable singularities, and suggests a new staggered mesh method for reducing finite-size errors of periodic HF calculations.
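
The role of integrand smoothness can be seen in a one-dimensional toy example: the periodic trapezoidal rule, which is spectrally accurate for smooth periodic integrands, slows to an algebraic rate when the integrand has a kink. The snippet below (unrelated to any specific HF/MP2 quantity) prints the error for |sin x| on [0, 2*pi].

```python
import numpy as np

def trapezoid_periodic(f, n):
    """Periodic trapezoidal rule on [0, 2*pi) with n equispaced nodes."""
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return (2.0 * np.pi / n) * np.sum(f(x))

exact = 4.0  # integral of |sin x| over one period
for n in (8, 16, 32, 64, 128):
    err = abs(trapezoid_periodic(lambda x: np.abs(np.sin(x)), n) - exact)
    print(n, err)  # error decays only algebraically because of the kinks at 0 and pi
```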

The maximum likelihood estimator (MLE) is pivotal in statistical inference, yet its application is often hindered by the absence of closed-form solutions for many models. This poses challenges in real-time computation scenarios, particularly within embedded systems technology, where numerical methods are impractical. This study introduces a generalized form of the MLE that yields closed-form estimators under certain conditions. We derive the asymptotic properties of the proposed estimator and demonstrate that our approach retains key properties such as invariance under one-to-one transformations, strong consistency, and an asymptotic normal distribution. The effectiveness of the generalized MLE is exemplified through its application to the Gamma, Nakagami, and Beta distributions, showcasing improvements over the traditional MLE. Additionally, we extend this methodology to a bivariate gamma distribution, successfully deriving closed-form estimators. This advancement presents significant implications for real-time statistical analysis across various applications.
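
For a sense of what a closed-form Gamma estimator looks like, the sketch below implements the likelihood-equation-based estimators of Ye and Chen (2017); the estimators derived in this paper may differ in detail, so this is only an indicative example.

```python
import numpy as np

def gamma_closed_form(x):
    """Closed-form shape/scale estimators for the Gamma distribution.

    Based on the likelihood-equation estimators of Ye and Chen (2017); shown to
    illustrate closed-form estimation in general, not necessarily the exact
    generalized-MLE estimators proposed in the paper.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    denom = n * np.sum(x * np.log(x)) - np.sum(x) * np.sum(np.log(x))
    shape = n * np.sum(x) / denom
    scale = denom / n ** 2
    return shape, scale

# usage: gamma_closed_form(np.random.default_rng(1).gamma(2.5, 1.3, size=500))
```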

Selection models are ubiquitous in statistics. In recent years, they have regained considerable popularity as the working inferential models in many selective inference problems. In this paper, we derive an asymptotic expansion of the local likelihood ratios of selection models. We show that under mild regularity conditions, they are asymptotically equivalent to a sequence of Gaussian selection models. This generalizes the Local Asymptotic Normality framework of Le Cam (1960). Furthermore, we derive the asymptotic shape of Bayesian posterior distributions constructed from selection models, and show that they can be significantly miscalibrated in a frequentist sense.
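
A concrete instance of a Gaussian selection model, of the kind that appears as the limiting object in the expansion described above, is a normal observation reported only when it exceeds a threshold; its log-likelihood is sketched below (hypothetical names, shown purely for illustration).

```python
import numpy as np
from scipy.stats import norm

def gaussian_selection_loglik(mu, x, c, sigma=1.0):
    """Log-likelihood of mu for X ~ N(mu, sigma^2) observed only when X > c.

    The density of the selected observation is phi((x - mu)/sigma) / P(X > c),
    a simple example of the Gaussian selection models referred to in the abstract.
    """
    return norm.logpdf(x, loc=mu, scale=sigma) - norm.logsf(c, loc=mu, scale=sigma)
```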

Feedforward neural networks (FNNs) are typically viewed as pure prediction algorithms, and their strong predictive performance has led to their use in many machine-learning applications. However, their flexibility comes with an interpretability trade-off; thus, FNNs have historically been less popular among statisticians. Nevertheless, classical statistical theory, such as significance testing and uncertainty quantification, is still relevant. Supplementing FNNs with methods of statistical inference and covariate-effect visualisations can shift the focus away from black-box prediction and make FNNs more akin to traditional statistical models. This can allow for more inferential analysis and, hence, make FNNs more accessible within the statistical-modelling context.
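
One simple covariate-effect visualisation of the kind alluded to above is a partial-dependence-style curve: average the network's predictions while sweeping a single covariate over a grid. The sketch below is generic (any fitted model with a `.predict` method), not the specific procedure used in the paper.

```python
import numpy as np

def covariate_effect(model, X, j, grid_size=50):
    """Average prediction as covariate j varies over a grid, with the other
    covariates fixed at their observed values (a partial-dependence-style curve).
    `model` is any fitted regressor with a .predict method, e.g. a feedforward network."""
    grid = np.linspace(X[:, j].min(), X[:, j].max(), grid_size)
    effect = []
    for v in grid:
        Xv = X.copy()
        Xv[:, j] = v
        effect.append(model.predict(Xv).mean())
    return grid, np.array(effect)

# usage: grid, effect = covariate_effect(fitted_fnn, X, j=0); then plot effect against grid
```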

Quantum supervised learning, utilizing variational circuits, stands out as a promising technology for NISQ devices due to its efficiency in hardware resource utilization during the creation of quantum feature maps and the implementation of hardware-efficient ansatz with trainable parameters. Despite these advantages, the training of quantum models encounters challenges, notably the barren plateau phenomenon, which leads to stagnation of learning during optimization. This study proposes an innovative approach: an evolutionary-enhanced, ansatz-free supervised learning model. In contrast to parametrized circuits, our model employs circuits with a variable topology that evolves through an elitist method, mitigating the barren plateau issue. Additionally, we introduce a novel concept, the superposition of multi-hot encodings, facilitating the treatment of multi-classification problems. Our framework successfully avoids barren plateaus, resulting in enhanced model accuracy. A comparative analysis with state-of-the-art variational quantum classifiers reveals a substantial improvement in training efficiency and precision. Furthermore, we conduct tests on a challenging class of datasets, traditionally problematic for conventional kernel machines, demonstrating a potential alternative path to quantum advantage in supervised learning in the NISQ era.
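
The elitist evolution of circuit topologies can be pictured with a generic skeleton like the one below; the representation of a circuit, the mutation operator, and the fitness function (e.g. validation accuracy of the evolved classifier) are all placeholders, and no quantum library is assumed.

```python
import random

def elitist_evolve(population, fitness, mutate, n_generations=50, n_elite=4, seed=0):
    """Generic elitist evolutionary loop: keep the best `n_elite` candidates unchanged
    and refill the population by mutating them.  A schematic stand-in for the
    circuit-topology evolution described in the abstract, not the authors' code."""
    rng = random.Random(seed)
    population = list(population)
    for _ in range(n_generations):
        population.sort(key=fitness, reverse=True)
        elite = population[:n_elite]
        population = elite + [mutate(rng.choice(elite)) for _ in range(len(population) - n_elite)]
    return max(population, key=fitness)
```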

We propose a novel and simple spectral method based on the semi-discrete Fourier transforms to discretize the fractional Laplacian $(-\Delta)^\frac{\alpha}{2}$. Numerical analysis and experiments are provided to study its performance. Our method has the same symbol $|\xi|^\alpha$ as the fractional Laplacian $(-\Delta)^\frac{\alpha}{2}$ at the discrete level, and thus it can be viewed as the exact discrete analogue of the fractional Laplacian. This {\it unique feature} distinguishes our method from other existing methods for the fractional Laplacian. Note that our method is different from the Fourier pseudospectral methods in the literature, which are usually limited to periodic boundary conditions (see Remark \ref{remark0}). Numerical analysis shows that our method achieves spectral accuracy. The stability and convergence of our method in solving the fractional Poisson equations are analyzed. Our scheme yields a multilevel Toeplitz stiffness matrix, and thus fast algorithms can be developed for efficient matrix-vector products. The computational complexity is ${\mathcal O}(2N\log(2N))$, and the memory storage is ${\mathcal O}(N)$, with $N$ the total number of points. Extensive numerical experiments verify our analytical results and demonstrate the effectiveness of our method in solving various problems.
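
The fast matrix-vector product exploits the Toeplitz structure through circulant embedding and the FFT; a one-dimensional sketch of that standard trick is given below (the paper's stiffness matrix is multilevel Toeplitz, so the same idea is applied dimension by dimension).

```python
import numpy as np

def toeplitz_matvec(col, row, v):
    """Multiply a Toeplitz matrix (first column `col`, first row `row`) by `v`
    in O(N log N) time via circulant embedding and the FFT.  A generic 1D sketch
    of the fast product; the scheme in the paper uses the multilevel analogue."""
    n = len(v)
    # First column of the 2n x 2n circulant matrix that embeds the Toeplitz matrix
    c = np.concatenate([col, [0.0], row[:0:-1]])
    w = np.concatenate([v, np.zeros(n)])
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(w))[:n])

# check (hypothetical usage): from scipy.linalg import toeplitz
#   np.allclose(toeplitz(col, row) @ v, toeplitz_matvec(col, row, v))
```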
