
We study the multiplicative hazards model with intermittently observed longitudinal covariates and time-varying coefficients. For such models, existing {\it ad hoc} approaches, such as last value carried forward, are biased. We propose a kernel weighting approach that yields unbiased estimation of the nonparametric coefficient function and establish asymptotic normality at any fixed time point. Furthermore, we construct a simultaneous confidence band to examine the overall magnitude of the variation. Simulation studies support our theoretical predictions and show favorable performance of the proposed method. A data set from a cerebral infarction study is used to illustrate our methodology.
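
In outline, and in our own notation (the paper's exact estimating equations are not reproduced here), the model and the kernel weighting idea are

\[
\lambda_i\{t \mid Z_i(t)\} \;=\; \lambda_0(t)\,\exp\{\beta(t)^{\top} Z_i(t)\},
\qquad
w_{ij}(t) \;=\; K_h(t_{ij} - t),
\]

where the $t_{ij}$ are the times at which subject $i$'s covariates are actually observed and $K_h(\cdot)=K(\cdot/h)/h$ is a kernel with bandwidth $h$; rather than carrying the last observed value forward, each covariate observation is weighted by how close its measurement time is to the target time $t$.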

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. MODELS participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum where participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition offers the modeling community an opportunity to further advance the foundations of modeling and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
December 12, 2023

We provide more sample-efficient versions of some basic routines in quantum data analysis, along with simpler proofs. Particularly, we give a quantum "Threshold Search" algorithm that requires only $O((\log^2 m)/\epsilon^2)$ samples of a $d$-dimensional state $\rho$. That is, given observables $0 \le A_1, \ldots, A_m \le 1$ such that $\mathrm{tr}(\rho A_i) \ge 1/2$ for at least one $i$, the algorithm finds $j$ with $\mathrm{tr}(\rho A_j) \ge 1/2-\epsilon$. As a consequence, we obtain a Shadow Tomography algorithm requiring only $\tilde{O}((\log^2 m)(\log d)/\epsilon^4)$ samples, which simultaneously achieves the best known dependence on each parameter $m$, $d$, $\epsilon$. This yields the same sample complexity for quantum Hypothesis Selection among $m$ states; we also give an alternative Hypothesis Selection method using $\tilde{O}((\log^3 m)/\epsilon^2)$ samples.
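
For context, the Shadow Tomography task referenced above (in Aaronson's formulation, stated here in our own words) is: given copies of an unknown $d$-dimensional state $\rho$ and known observables $0 \le A_1,\ldots,A_m \le 1$,

\[
\text{output } \hat{\mu}_1,\ldots,\hat{\mu}_m \ \text{ such that } \ \bigl|\hat{\mu}_i - \mathrm{tr}(\rho A_i)\bigr| \le \epsilon \ \text{ for all } i=1,\ldots,m,
\]

with high probability, using as few copies of $\rho$ as possible; the Threshold Search routine is the subroutine through which the abstract's $\tilde{O}((\log^2 m)(\log d)/\epsilon^4)$ bound is obtained.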

To synthesize high-fidelity samples, diffusion models typically require auxiliary data to guide the generation process. However, procuring the painstaking patch-level annotation required in specialized domains like histopathology and satellite imagery is impractical: such annotation must be performed by domain experts and can involve hundreds of millions of patches. Modern-day self-supervised learning (SSL) representations encode rich semantic and visual information. In this paper, we posit that such representations are expressive enough to act as proxies to fine-grained human labels. We introduce a novel approach that trains diffusion models conditioned on embeddings from SSL. Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images. In addition, we construct larger images by assembling spatially consistent patches inferred from SSL embeddings, preserving long-range dependencies. Augmenting real data by generating variations of real images improves downstream classifier accuracy for patch-level and larger, image-scale classification tasks. Our models are effective even on datasets not encountered during training, demonstrating their robustness and generalizability. Generating images from learned embeddings is agnostic to the source of the embeddings. The SSL embeddings used to generate a large image can either be extracted from a reference image, or sampled from an auxiliary model conditioned on any related modality (e.g. class labels, text, genomic data). As proof of concept, we introduce the text-to-large image synthesis paradigm where we successfully synthesize large pathology and satellite images from text descriptions.
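
As a rough sketch of how such conditioning can be wired up (illustrative PyTorch code with our own names such as CondDenoiser and training_step, not the authors' implementation), the frozen SSL embedding of a clean patch can be fed to an epsilon-prediction network as its conditioning input and trained with the standard DDPM objective. In practice the denoiser would be a U-Net over image patches with the embedding injected via cross-attention or adaptive normalization; the flat MLP below only keeps the sketch short.

import torch
import torch.nn as nn

class CondDenoiser(nn.Module):
    """Toy epsilon-prediction network conditioned on an SSL embedding (illustrative only)."""
    def __init__(self, x_dim, emb_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + emb_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, x_dim),
        )

    def forward(self, x_t, t, emb):
        # concatenate noisy sample, timestep, and SSL embedding
        return self.net(torch.cat([x_t, t[:, None].float(), emb], dim=-1))

def training_step(model, x0, emb, alphas_bar, opt):
    # DDPM-style step: noise the data, ask the network to recover the injected noise
    t = torch.randint(0, len(alphas_bar), (x0.shape[0],))
    a = alphas_bar[t][:, None]
    noise = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise
    loss = ((model(x_t, t, emb) - noise) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()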

We consider the estimation of generalized additive models using basis expansions coupled with Bayesian model selection. Although Bayesian model selection is an intuitively appealing tool for regression splines, its use has traditionally been limited to Gaussian additive regression because of the availability of a tractable form of the marginal model likelihood. We extend the method to encompass the exponential family of distributions using the Laplace approximation to the likelihood. Although the approach exhibits success with any Gaussian-type prior distribution, there remains a lack of consensus regarding the best prior distribution for nonparametric regression through model selection. We observe that the classical unit information prior distribution for variable selection may not be well-suited for nonparametric regression using basis expansions. Instead, our investigation reveals that mixtures of g-priors are more suitable. We consider various mixtures of g-priors to evaluate their performance in estimating generalized additive models. Furthermore, we conduct a comparative analysis of several priors for knots to identify the most practically effective strategy. Our extensive simulation studies demonstrate the superiority of model selection-based approaches over other Bayesian methods.
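
For background, Zellner's $g$-prior on the basis coefficients of a candidate model $\gamma$ (standard notation, not taken from the paper) is

\[
\beta_\gamma \mid g, \sigma^2 \;\sim\; \mathcal{N}\!\bigl(0,\; g\,\sigma^2\,(X_\gamma^{\top} X_\gamma)^{-1}\bigr),
\qquad g \sim \pi(g),
\]

where $X_\gamma$ collects the spline basis columns included in model $\gamma$; fixing $g=n$ gives the unit information prior, whereas a mixture of $g$-priors places a continuous hyperprior $\pi(g)$ on $g$, which is the family the abstract finds more suitable for basis-expansion regression.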

Neural operators have been validated as promising deep surrogate models for solving partial differential equations (PDEs). Despite the critical role of boundary conditions (BCs) in PDEs, however, only a limited number of neural operators robustly enforce these conditions. In this paper we introduce the semi-periodic Fourier neural operator (SPFNO), a novel spectral operator learning (SOL) method, to learn the target operators of PDEs with non-periodic BCs. This method extends our previous work (arXiv:2206.12698), which showed significant improvements by employing enhanced neural operators that precisely satisfy the boundary conditions. However, the previous work is tied to Gaussian grids, restricting comprehensive comparisons across most public datasets. Additionally, we present numerical results for various PDEs such as the viscous Burgers' equation, Darcy flow, incompressible pipe flow, and coupled reaction-diffusion equations. These results demonstrate the computational efficiency, resolution-invariance, and BC-satisfying behavior of the proposed model. An accuracy improvement of approximately 1.7X-4.7X over non-BC-satisfying baselines is also achieved. Furthermore, our studies on SOL underscore the significance of satisfying BCs as a criterion for deep surrogate models of PDEs.
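
To make the spectral-layer idea concrete, below is a minimal sketch of a standard periodic 1D Fourier layer in PyTorch (our own toy code, with assumed names such as SpectralConv1d); per the abstract, SPFNO differs by replacing this periodic Fourier basis with transforms whose basis functions exactly satisfy the non-periodic BCs.

import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Standard 1D Fourier layer: FFT -> learnable mixing of the low modes -> inverse FFT.
    This is only the periodic baseline, not the SPFNO layer itself."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (channels * channels)
        self.weight = nn.Parameter(scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x):                            # x: (batch, channels, nx)
        x_hat = torch.fft.rfft(x)                    # complex spectrum
        out_hat = torch.zeros_like(x_hat)
        out_hat[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_hat[:, :, :self.modes], self.weight)
        return torch.fft.irfft(out_hat, n=x.size(-1))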

Gaussian graphical models are useful tools for conditional independence structure inference of multivariate random variables. Unfortunately, Bayesian inference of latent graph structures is challenging due to the exponential growth of $\mathcal{G}_n$, the set of all graphs on $n$ vertices. One approach that has been proposed to tackle this problem is to limit the search to subsets of $\mathcal{G}_n$. In this paper, we study subsets that are vector subspaces, with the cycle space $\mathcal{C}_n$ as the main example. We propose a novel prior on $\mathcal{C}_n$ based on linear combinations of cycle basis elements and present its theoretical properties. Using this prior, we implement a Markov chain Monte Carlo algorithm, and show that (i) posterior edge inclusion estimates computed with our technique are comparable to estimates from the standard technique despite searching a smaller graph space, and (ii) the vector space perspective enables straightforward implementation of MCMC algorithms.
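
To illustrate the vector-space view (a toy sketch with our own helper names, not the authors' implementation): elements of the cycle space of $K_n$ can be generated as GF(2) combinations, i.e. symmetric differences, of fundamental-cycle edge sets.

import networkx as nx

def fundamental_cycle_edges(n):
    # fundamental cycles of K_n with respect to a spanning tree: a basis of the cycle space
    G = nx.complete_graph(n)
    T = nx.minimum_spanning_tree(G)
    basis = []
    for u, v in G.edges():
        if not T.has_edge(u, v):
            path = nx.shortest_path(T, u, v)  # tree path that closes a cycle with (u, v)
            cycle = {frozenset(e) for e in zip(path, path[1:])} | {frozenset((u, v))}
            basis.append(cycle)
    return basis

def graph_from_coefficients(basis, coeffs):
    # GF(2) combination: an edge is present iff it lies in an odd number of selected cycles
    edges = set()
    for c, cycle in zip(coeffs, basis):
        if c:
            edges ^= cycle  # symmetric difference = addition over GF(2)
    return edges

basis = fundamental_cycle_edges(5)
print(len(basis))                                   # dim C_5 = C(5,2) - 5 + 1 = 6
print(graph_from_coefficients(basis, [1, 0, 1, 0, 0, 1]))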

We develop adaptive time-stepping strategies for It\^o-type stochastic differential equations (SDEs) with jump perturbations. Our approach builds on adaptive strategies for SDEs. Adaptive methods can ensure strong convergence of nonlinear SDEs with drift and diffusion coefficients that violate global Lipschitz bounds by dynamically adjusting the stepsize on each trajectory, preventing the spurious growth that, if it occurs with sufficiently high probability, leads to a loss of convergence. In this article we demonstrate the use of a jump-adapted mesh that incorporates jump times into the adaptive time-stepping strategy. We prove that any adaptive scheme satisfying a particular mean-square consistency bound for a nonlinear SDE in the non-jump case may be extended to a strongly convergent scheme in the Poisson jump case where jump and diffusion perturbations are mutually independent and the jump coefficient satisfies a global Lipschitz condition.
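
A minimal sketch of the ingredients, under our own simplifying assumptions (scalar SDE, Euler-Maruyama update, a crude drift-based stepsize rule); the paper's schemes and consistency conditions are more general.

import numpy as np

def adaptive_em_jump(f, g, h, x0, T, lam, hmax, hmin, rng=None):
    """Toy adaptive Euler-Maruyama for dX = f(X) dt + g(X) dW + h(X) dN,
    on a jump-adapted mesh that never steps over a jump time (sketch only)."""
    rng = rng or np.random.default_rng()
    # pre-sample the Poisson jump times with intensity lam on [0, T]
    jump_times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam)
        if t > T:
            break
        jump_times.append(t)
    jumps = iter(jump_times + [np.inf])
    next_jump = next(jumps)
    t, x, path = 0.0, x0, [(0.0, x0)]
    while t < T:
        dt = max(hmax / (1.0 + abs(f(x))), hmin)      # shrink the step where the drift is large
        dt = min(dt, hmax, T - t, next_jump - t)      # jump-adapted: land exactly on jump times
        x = x + f(x) * dt + g(x) * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if np.isclose(t, next_jump):                  # apply the jump, then fetch the next one
            x = x + h(x)
            next_jump = next(jumps)
        path.append((t, x))
    return path

# example with a non-globally-Lipschitz drift f(x) = x - x^3
path = adaptive_em_jump(lambda x: x - x**3, lambda x: 0.5, lambda x: -0.2 * x,
                        x0=1.0, T=1.0, lam=2.0, hmax=1e-2, hmin=1e-5)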

The joint modeling of multiple longitudinal biomarkers together with a time-to-event outcome is a challenging modeling task of continued scientific interest. In particular, the computational complexity of high dimensional (generalized) mixed effects models often restricts the flexibility of shared parameter joint models, even when the subject-specific marker trajectories follow highly nonlinear courses. We propose a parsimonious multivariate functional principal components representation of the shared random effects. This improves scalability, as the dimension of the random effects does not directly increase with the number of markers, but only with the chosen number of principal component basis functions used in the approximation of the random effects. The functional principal component representation additionally allows the estimation of highly flexible subject-specific random trajectories without parametric assumptions. The modeled trajectories can thus be distinctly different for each biomarker. We build on the framework of flexible Bayesian additive joint models implemented in the R-package 'bamlss', which also supports estimation of nonlinear covariate effects via Bayesian P-splines. The flexible yet parsimonious functional principal components basis used in the estimation of the joint model is first estimated in a preliminary step. We validate our approach in a simulation study and illustrate its advantages by analyzing a study on primary biliary cholangitis.
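
Schematically, and in our own generic notation rather than the paper's, the structure is

\[
y_{ij}(t) \;=\; \eta_{ij}(t) + \varepsilon_{ij}(t),
\qquad
\eta_{ij}(t) \;=\; \mu_j(t) + \sum_{k=1}^{K} \rho_{ik}\,\psi_{jk}(t),
\qquad
h_i(t) \;=\; h_0(t)\,\exp\Bigl\{\gamma^{\top} x_i + \textstyle\sum_{j} \alpha_j\,\eta_{ij}(t)\Bigr\},
\]

where $y_{ij}(t)$ is marker $j$ of subject $i$, the $\psi_{jk}$ are the multivariate functional principal component basis functions estimated in the preliminary step, and the $K$ subject-level scores $\rho_{ik}$ are shared across markers, so the random-effect dimension is $K$ rather than growing with the number of markers.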

We study the iterative methods for large moment systems derived from the linearized Boltzmann equation. By Fourier analysis, it is shown that the direct application of the block symmetric Gauss-Seidel (BSGS) method converges more slowly for smaller Knudsen numbers. Better convergence rates for dense flows are then achieved by coupling the BSGS method with the micro-macro decomposition, which treats the moment equations as a coupled system with a microscopic part and a macroscopic part. Since the macroscopic part contains only a small number of equations, it can be solved accurately during the iteration with a relatively small computational cost, which accelerates the overall iteration. The method is further generalized to the multiscale decomposition, which splits the moment system into many subsystems with different orders of magnitude. Both one- and two-dimensional numerical tests are carried out to examine the performance of these methods. Possible issues regarding the efficiency and convergence are discussed in the conclusion.
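
The algebraic pattern can be illustrated on a toy linear system (our own sketch, not the moment-system solver itself): within each sweep, the large microscopic block is relaxed while the small macroscopic block is solved exactly.

import numpy as np

def micro_macro_gs(A, b, macro_idx, sweeps=200, tol=1e-10):
    """Toy micro-macro split iteration for A x = b: Gauss-Seidel relaxation on the
    microscopic block, exact solve of the small macroscopic subsystem each sweep."""
    n = len(b)
    micro_idx = np.setdiff1d(np.arange(n), macro_idx)
    x = np.zeros(n)
    for _ in range(sweeps):
        # micro update: one pointwise Gauss-Seidel sweep on the large microscopic block
        for i in micro_idx:
            x[i] = (b[i] - A[i, :] @ x + A[i, i] * x[i]) / A[i, i]
        # macro update: solve the small macroscopic subsystem exactly
        Am = A[np.ix_(macro_idx, macro_idx)]
        rm = b[macro_idx] - A[np.ix_(macro_idx, micro_idx)] @ x[micro_idx]
        x[macro_idx] = np.linalg.solve(Am, rm)
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            break
    return x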

This paper considers the problem of robust iterative Bayesian smoothing in nonlinear state-space models with additive noise using Gaussian approximations. Iterative methods are known to improve smoothed estimates but are not guaranteed to converge, motivating the development of more robust versions of the algorithms. The aim of this article is to present Levenberg-Marquardt (LM) and line-search extensions of the classical iterated extended Kalman smoother (IEKS) as well as the iterated posterior linearisation smoother (IPLS). The IEKS has previously been shown to be equivalent to the Gauss-Newton (GN) method. We derive a similar GN interpretation for the IPLS. Furthermore, we show that an LM extension for both iterative methods can be achieved with a simple modification of the smoothing iterations, enabling algorithms with efficient implementations. Our numerical experiments show the importance of robust methods, in particular for the IEKS-based smoothers. The computationally expensive IPLS-based smoothers are naturally robust but can still benefit from further regularisation.
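
Schematically (our notation), the move from the GN-type iteration to its LM extension replaces

\[
(J^{\top} W J)\,\Delta \;=\; -\,J^{\top} W r
\quad\longrightarrow\quad
(J^{\top} W J + \lambda I)\,\Delta \;=\; -\,J^{\top} W r,
\]

where $r$ stacks the prior and measurement residuals of the current smoothed trajectory, $J$ is its Jacobian, $W$ collects the corresponding inverse covariances, and the damping parameter $\lambda \ge 0$ interpolates between a full GN step ($\lambda \to 0$) and a short gradient-descent-like step ($\lambda$ large). One common way to realise the extra $\lambda I$ term inside a smoother is as an artificial observation of the state centred at the previous iterate, which is the kind of simple modification of the smoothing iterations referred to in the abstract.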

Deep learning methods are emerging as popular computational tools for solving forward and inverse problems in traffic flow. In this paper, we study a neural operator framework for learning solutions to nonlinear hyperbolic partial differential equations with applications in macroscopic traffic flow models. In this framework, an operator is trained to map heterogeneous and sparse traffic input data to the complete macroscopic traffic state in a supervised learning setting. We chose a physics-informed Fourier neural operator ($\pi$-FNO) as the operator, where an additional physics loss based on a discrete conservation law regularizes the problem during training to improve the shock predictions. We also propose to use training data generated from random piecewise constant input data to systematically capture the shock and rarefaction solutions. From experiments using the LWR traffic flow model, we found superior accuracy in predicting the density dynamics of a ring-road network and an urban signalized road. We also found that the operator can be trained using simple traffic density dynamics, e.g., consisting of $2-3$ vehicle queues and $1-2$ traffic signal cycles, and it can predict density dynamics for heterogeneous vehicle queue distributions and multiple traffic signal cycles $(\geq 2)$ with an acceptable error. The extrapolation error grew sub-linearly with input complexity for a proper choice of the model architecture and training data. Adding a physics regularizer aided in learning long-term traffic density dynamics, especially for problems with periodic boundary data.
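
A minimal sketch of such a physics regularizer (our own code, with a Greenshields flux assumed for illustration; the paper's discrete conservation law may differ in form): the predicted density field is penalized for violating the discrete LWR conservation law, on top of the supervised data loss.

import torch

def conservation_residual(rho, dt, dx, v_f=1.0, rho_max=1.0):
    # rho: predicted density field of shape (batch, nt, nx)
    # residual of rho_t + q(rho)_x = 0 with the Greenshields flux q(rho) = v_f * rho * (1 - rho / rho_max)
    q = v_f * rho * (1.0 - rho / rho_max)
    drho_dt = (rho[:, 1:, :-1] - rho[:, :-1, :-1]) / dt   # forward difference in time
    dq_dx = (q[:, :-1, 1:] - q[:, :-1, :-1]) / dx         # forward difference in space
    return drho_dt + dq_dx

def total_loss(rho_pred, rho_true, dt, dx, weight=0.1):
    # supervised data loss plus the weighted physics (conservation-law) penalty
    data_loss = torch.mean((rho_pred - rho_true) ** 2)
    phys_loss = torch.mean(conservation_residual(rho_pred, dt, dx) ** 2)
    return data_loss + weight * phys_loss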
