
This paper introduces a prognostic method called FLASH that addresses the problem of joint modelling of longitudinal data and censored durations when a large number of both longitudinal and time-independent features are available. In the literature, standard joint models are either of the shared random effect or joint latent class type. Combining ideas from both worlds and using appropriate regularisation techniques, we define a new model with the ability to automatically identify significant prognostic longitudinal features in a high-dimensional context, which is of increasing importance in many areas such as personalised medicine or churn prediction. We develop an estimation methodology based on the EM algorithm and provide an efficient implementation. The statistical performance of the method is demonstrated both in extensive Monte Carlo simulation studies and on publicly available real-world datasets. Our method significantly outperforms the state-of-the-art joint models in predicting the latent class membership probability in terms of the C-index in a so-called ``real-time'' prediction setting, with a computational speed that is orders of magnitude faster than competing methods. In addition, our model automatically identifies significant features that are relevant from a practical perspective, making it interpretable.
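
To make the latent-class machinery concrete, here is a minimal sketch of the E-step/M-step loop for the class-membership part of such a joint model, assuming per-class log-likelihoods from the longitudinal and survival submodels are already computed. The function name and inputs are hypothetical; FLASH's actual updates, penalties, and submodels are considerably more involved.

```python
import numpy as np

def em_latent_class_posteriors(loglik_long, loglik_surv, n_iter=50):
    """E-step / M-step loop for the class-membership part of a joint latent
    class model: posteriors combine longitudinal and survival evidence."""
    n, K = loglik_long.shape
    pi = np.full(K, 1.0 / K)                         # latent class proportions
    for _ in range(n_iter):
        # E-step: posterior membership probabilities (log-sum-exp for stability)
        log_post = np.log(pi) + loglik_long + loglik_surv
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update proportions; in the real model the class-specific
        # submodel parameters would also be re-fit here, with sparsity-inducing
        # penalties selecting the prognostic features
        pi = post.mean(axis=0)
    return pi, post

# Placeholder log-likelihoods for 200 subjects and K = 3 latent classes
rng = np.random.default_rng(0)
pi, post = em_latent_class_posteriors(rng.normal(size=(200, 3)),
                                      rng.normal(size=(200, 3)))
```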

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series on model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industrial professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community an opportunity to further advance the foundations of modeling and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
September 30, 2024

We propose a numerical algorithm for the computation of multi-marginal optimal transport (MMOT) problems involving general probability measures that are not necessarily discrete. By developing a relaxation scheme in which marginal constraints are replaced by finitely many linear constraints and by proving a specifically tailored duality result for this setting, we approximate the MMOT problem by a linear semi-infinite optimization problem. Moreover, we are able to recover a feasible and approximately optimal solution of the MMOT problem, and its sub-optimality can be controlled to be arbitrarily close to 0 under mild conditions. The developed relaxation scheme leads to a numerical algorithm which can compute a feasible approximate optimizer of the MMOT problem whose theoretical sub-optimality can be chosen to be arbitrarily small. Besides the approximate optimizer, the algorithm is also able to compute both an upper bound and a lower bound for the optimal value of the MMOT problem. The difference between the computed bounds provides an explicit sub-optimality bound for the computed approximate optimizer. We demonstrate the proposed algorithm in three numerical experiments involving an MMOT problem that stems from fluid dynamics, the Wasserstein barycenter problem, and a large-scale MMOT problem with 100 marginals. We observe that our algorithm is capable of computing high-quality solutions of these MMOT problems and the computed sub-optimality bounds are much less conservative than their theoretical upper bounds in all the experiments.
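
As a toy illustration of the kind of finite linear programs that arise once marginals are discretised, the sketch below solves a small, fully discrete MMOT instance as an LP with `scipy.optimize.linprog`. This covers only the discrete special case, not the paper's relaxation scheme for general measures or its duality-based upper and lower bounds.

```python
import numpy as np
from scipy.optimize import linprog

def discrete_mmot(cost, marginals):
    """Solve a small fully discrete MMOT problem as a linear program.
    cost: array of shape (m_1, ..., m_N); marginals: list of N probability vectors."""
    shape, N = cost.shape, cost.ndim
    A_rows, b = [], []
    for k, mu in enumerate(marginals):
        for i in range(shape[k]):
            row = np.zeros(shape)
            idx = [slice(None)] * N
            idx[k] = i
            row[tuple(idx)] = 1.0          # mass of couplings whose k-th index is i
            A_rows.append(row.ravel())
            b.append(mu[i])
    res = linprog(cost.ravel(), A_eq=np.array(A_rows), b_eq=np.array(b),
                  bounds=(0, None), method="highs")
    return res.x.reshape(shape), res.fun

# Three uniform marginals on a grid, pairwise quadratic cost
m = 5
X = np.linspace(0.0, 1.0, m)
I, J, K = np.meshgrid(X, X, X, indexing="ij")
cost = (I - J) ** 2 + (J - K) ** 2 + (I - K) ** 2
plan, value = discrete_mmot(cost, [np.full(m, 1.0 / m)] * 3)
print(value)
```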

The aim of this article is to present a hybrid finite element/finite difference method used to reconstruct electromagnetic properties within a realistic breast phantom. This is done by studying the coefficients representing these properties (electric permittivity and conductivity in this case) in a system of Maxwell's equations. This information is valuable since these coefficients can reveal the types of tissue within the breast and, in applications, could be used to detect the shapes and locations of tumours. Because of the ill-posed nature of this coefficient inverse problem, we approach it as an optimization problem by introducing the corresponding Tikhonov functional and, in turn, a Lagrangian. These are then minimized by utilizing an interplay between finite element and finite difference methods for the solution of the direct and adjoint problems, and thereafter by applying a conjugate gradient method on an adaptively refined mesh.
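
The reconstruction itself relies on FEM/FDM solves of Maxwell's equations, but the optimisation skeleton, a Tikhonov-regularised least-squares problem attacked with conjugate gradients, can be sketched on a linear toy problem. Here the matrix `A` stands in for the parameter-to-observation map; the actual problem is nonlinear and is solved via the Lagrangian with adaptive mesh refinement.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def tikhonov_cg(A, d, alpha):
    """Minimise ||A x - d||^2 + alpha ||x||^2 by running conjugate gradients
    on the normal equations (A^T A + alpha I) x = A^T d."""
    n = A.shape[1]
    op = LinearOperator((n, n), matvec=lambda x: A.T @ (A @ x) + alpha * x)
    x, info = cg(op, A.T @ d)
    assert info == 0, "CG did not converge"
    return x

# Noisy linear toy problem standing in for the parameter-to-data map
rng = np.random.default_rng(1)
A = rng.normal(size=(60, 40))
x_true = rng.normal(size=40)
d = A @ x_true + 0.01 * rng.normal(size=60)
x_rec = tikhonov_cg(A, d, alpha=0.1)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```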

It is often said that the fundamental problem of causal inference is a missing data problem -- the comparison of responses to two hypothetical treatment assignments is made difficult because for every experimental unit only one potential response is observed. In this paper, we consider the implications of the converse view: that missing data problems are a form of causal inference. We make explicit how the missing data problem of recovering the complete data law from the observed law can be viewed as identification of a joint distribution over counterfactual variables corresponding to values had we (possibly contrary to fact) been able to observe them. Drawing analogies with causal inference, we show how identification assumptions in missing data can be encoded in terms of graphical models defined over counterfactual and observed variables. We review recent results in missing data identification from this viewpoint. In doing so, we note interesting similarities and differences between missing data and causal identification theories.

We formulate a uniform tail bound for empirical processes indexed by a class of functions, in terms of the individual deviations of the functions rather than the worst-case deviation in the considered class. The tail bound is established by introducing an initial ``deflation'' step to the standard generic chaining argument. The resulting tail bound is the sum of the complexity of the ``deflated function class'' in terms of a generalization of Talagrand's $\gamma$ functional, and the deviation of the function instance, both of which are formulated based on the natural seminorm induced by the corresponding Cram\'{e}r functions. Leveraging another less demanding natural seminorm, we also show similar bounds, though with implicit dependence on the sample size, in the more general case where finite exponential moments cannot be assumed. We also provide approximations of the tail bounds in terms of the more prevalent Orlicz norms or their ``incomplete'' versions under suitable moment conditions.

In this paper, we apply quasi-Monte Carlo (QMC) methods with an initial preintegration step to estimate cumulative distribution functions and probability density functions in uncertainty quantification (UQ). The distribution and density functions correspond to a quantity of interest involving the solution to an elliptic partial differential equation (PDE) with a lognormally distributed coefficient and a normally distributed source term. There is extensive previous work on using QMC to compute expected values in UQ, and such methods have proven very successful in tackling a range of different PDE problems. However, the use of QMC for density estimation applied to UQ problems will be explored here for the first time. Density estimation presents a more difficult challenge compared to computing the expected value due to discontinuities present in the integral formulations of both the distribution and density. Our strategy is to use preintegration to eliminate the discontinuity by integrating out a carefully selected random parameter, so that QMC can be used to approximate the remaining integral. First, we establish regularity results for the PDE quantity of interest that are required for smoothing by preintegration to be effective. We then show that an $N$-point lattice rule can be constructed for the integrands corresponding to the distribution and density, such that after preintegration the QMC error is of order $\mathcal{O}(N^{-1+\epsilon})$ for arbitrarily small $\epsilon>0$. This is the same rate achieved for computing the expected value of the quantity of interest. Numerical results are presented to reaffirm our theory.
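
The smoothing effect of preintegration is easy to demonstrate on a toy Gaussian quantity of interest: integrating out one coordinate turns the discontinuous indicator inside the CDF into a smooth $\Phi$ term, after which QMC converges quickly. The sketch below uses a scrambled Sobol' sequence from `scipy.stats.qmc` as a stand-in for the lattice rules constructed in the paper; the linear quantity of interest is purely illustrative.

```python
import numpy as np
from scipy.stats import norm, qmc

def cdf_preintegrated_qmc(w, t, m=12, seed=0):
    """Estimate P(w . Z <= t) for Z ~ N(0, I), assuming w[0] > 0: Z_1 is
    integrated out in closed form (the smooth Phi term below), and QMC
    handles the remaining coordinates."""
    w = np.asarray(w, dtype=float)
    sob = qmc.Sobol(d=len(w) - 1, scramble=True, seed=seed)
    u = np.clip(sob.random_base2(m=m), 1e-12, 1 - 1e-12)  # 2^m points in (0,1)^{s-1}
    z_rest = norm.ppf(u)                                  # Gaussian coordinates
    return norm.cdf((t - z_rest @ w[1:]) / w[0]).mean()   # preintegrated over Z_1

# Sanity check against the exact value Phi(t / ||w||)
w, t = np.array([1.0, 0.5, 0.25]), 0.3
print(cdf_preintegrated_qmc(w, t), norm.cdf(t / np.linalg.norm(w)))
```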

The consensus problem in distributed computing involves a network of agents aiming to compute the average of their initial vectors through local communication, represented by an undirected graph. This paper focuses on the study of this problem using an average-case analysis approach, particularly over regular graphs. Traditional algorithms for solving the consensus problem often rely on worst-case performance evaluation scenarios, which may not reflect typical performance in real-world applications. Instead, we apply average-case analysis, focusing on the expected spectral distribution of eigenvalues to obtain a more realistic view of performance. Key contributions include deriving the optimal method for consensus on regular graphs, showing its relation to the Heavy Ball method, analyzing its asymptotic convergence rate, and comparing it to various first-order methods through numerical experiments.
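
For intuition, here is a minimal sketch of plain Heavy Ball consensus on a ring (a 2-regular graph), with step and momentum parameters given by the classical Polyak tuning over the nonzero part of the Laplacian spectrum. This is the worst-case tuning, not the average-case-optimal method derived in the paper.

```python
import numpy as np

def heavy_ball_consensus(x0, L, n_iter=100):
    """Consensus via x_{k+1} = x_k - a L x_k + b (x_k - x_{k-1}), with a, b
    tuned to the nonzero Laplacian spectrum [lam_2, lam_max] (Polyak tuning)."""
    lam = np.sort(np.linalg.eigvalsh(L))
    r1, r2 = np.sqrt(lam[1]), np.sqrt(lam[-1])
    a = (2.0 / (r1 + r2)) ** 2                   # step size
    b = ((r2 - r1) / (r1 + r2)) ** 2             # momentum
    x_prev = x = x0.copy()
    for _ in range(n_iter):
        x, x_prev = x - a * (L @ x) + b * (x - x_prev), x
    return x

# Ring graph (2-regular): every node communicates with its two neighbours
n = 20
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
L = 2.0 * np.eye(n) - A
x0 = np.random.default_rng(0).normal(size=n)
print(heavy_ball_consensus(x0, L)[:3], x0.mean())   # iterates approach the average
```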

In this work, we develop Crank-Nicolson-type iterative decoupled algorithms for a three-field formulation of Biot's consolidation model using total pressure. We begin by constructing an equivalent fully implicit coupled algorithm using the standard Crank-Nicolson method for the three-field formulation of Biot's model. Employing an iterative decoupled scheme to decompose the resulting coupled system, we derive two distinctive forms of Crank-Nicolson-type iterative decoupled algorithms based on the order of temporal computation and iteration: a time-stepping iterative decoupled algorithm and a global-in-time iterative decoupled algorithm. Notably, the proposed global-in-time algorithm supports a partially parallel-in-time feature. Capitalizing on the convergence properties of the iterative decoupled scheme, both algorithms exhibit second-order time accuracy and unconditional stability. Through numerical experiments, we validate theoretical predictions and demonstrate the effectiveness and efficiency of these novel approaches.
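
The structure of the time-stepping iterative decoupled algorithm can be illustrated on a toy linear coupled system u' = (D + C)u: each Crank-Nicolson step lags the coupling block C and resolves it with inner fixed-point iterations, so only decoupled systems are solved. This is a schematic analogue under assumed toy matrices, not a discretisation of Biot's equations.

```python
import numpy as np

def cn_decoupled_step(u, D, C, dt, n_inner=20):
    """One Crank-Nicolson step for u' = (D + C) u: the coupling block C is
    lagged and resolved by inner fixed-point iterations, so each inner solve
    only involves the decoupled (here diagonal) operator D."""
    rhs = u + 0.5 * dt * (D + C) @ u             # explicit Crank-Nicolson half
    M = np.eye(len(u)) - 0.5 * dt * D            # decoupled implicit operator
    v = u.copy()
    for _ in range(n_inner):
        v = np.linalg.solve(M, rhs + 0.5 * dt * C @ v)
    return v

# Toy two-field system standing in for the displacement/pressure coupling
D = np.diag([-1.0, -2.0])
C = np.array([[0.0, 0.3], [0.3, 0.0]])
u, dt = np.array([1.0, 0.5]), 0.1
for _ in range(10):
    u = cn_decoupled_step(u, D, C, dt)
print(u)
```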

We study the decidability and complexity of the non-cooperative rational synthesis problem (abbreviated as NCRSP) for some classes of probabilistic strategies. We show that NCRSP for stationary strategies and Muller objectives is in 3-EXPTIME, and if we restrict the strategies of environment players to be positional, NCRSP becomes NEXPSPACE solvable. On the other hand, NCRSP_>, which is a variant of NCRSP, is shown to be undecidable even for pure finite-state strategies and terminal reachability objectives. Finally, we show that NCRSP becomes EXPTIME solvable if we restrict the memory of a strategy to be the most recently visited t vertices, where t is linear in the size of the game.

This paper deals with the nonlinear mechanics of an elevator brake system subjected to uncertainties. A deterministic model that relates the braking force to uncertain parameters is deduced from mechanical equilibrium conditions. In order to take parameter variability into account, a parametric probabilistic approach is employed. In this stochastic formalism, the uncertain parameters are modeled as random variables, with distributions specified by the maximum entropy principle. The uncertainties are propagated by the Monte Carlo method, which provides a detailed statistical characterization of the response. This work also considers the optimum design of the brake system, formulating and solving nonlinear optimization problems with and without the effects of uncertainties.
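
A minimal sketch of the uncertainty-propagation step: uncertain parameters receive maximum-entropy distributions (uniform for bounded parameters with known support, gamma for a positive parameter with prescribed mean information), samples are pushed through a braking-force relation, and Monte Carlo statistics are read off. The force formula and all parameter values below are purely illustrative assumptions, not the paper's equilibrium model.

```python
import numpy as np

rng = np.random.default_rng(42)

def braking_force(mu, N, theta):
    """Hypothetical equilibrium-type relation between braking force and the
    uncertain parameters -- illustrative only, not the paper's actual model."""
    return mu * N / (1.0 - mu * np.tan(theta))

n = 100_000
# Maximum-entropy choices for the assumed uncertain parameters
mu = rng.uniform(0.3, 0.5, n)                      # friction coefficient
N = rng.gamma(shape=100.0, scale=10.0, size=n)     # normal load
theta = np.deg2rad(rng.uniform(4.0, 6.0, n))       # wedge angle

F = braking_force(mu, N, theta)
print(f"mean = {F.mean():.1f}, std = {F.std():.1f}, "
      f"95% interval = ({np.quantile(F, 0.025):.1f}, {np.quantile(F, 0.975):.1f})")
```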

Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computational and storage efficiency. Deep hashing, which devises convolutional neural network architectures to extract the semantic features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods. Several comments are made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function that encourages similar images to be projected close together. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfactory performance of SRH.
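
Since the abstract leaves the shadow mechanism unspecified, the PyTorch sketch below shows one plausible reading: a pairwise similarity loss on relaxed hash codes plus a term tying the outputs to a slowly updated, EMA-style shadow of past binarised outputs. The function names, the EMA update, and the exact form of the interaction are all assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def shadow_hashing_loss(h, shadow, sim, alpha=0.1):
    """Pairwise similarity loss on relaxed hash codes h (batch x bits), plus a
    term pulling h towards a slowly updated 'shadow' of past binarised outputs.
    One plausible reading of SRH, not the paper's exact formulation."""
    inner = h @ h.t() / h.shape[1]        # normalised code inner products
    pair_loss = F.mse_loss(inner, sim)    # similar pairs -> 1, dissimilar -> 0
    shadow_loss = F.mse_loss(h, shadow)   # output and shadow guide each other
    return pair_loss + alpha * shadow_loss

@torch.no_grad()
def update_shadow(shadow, h, momentum=0.9):
    """Exponential moving average keeping the shadow close to recent codes."""
    return momentum * shadow + (1.0 - momentum) * torch.sign(h)

# One training step on random stand-ins for CNN outputs and similarity labels
z = torch.randn(8, 16, requires_grad=True)
h = torch.tanh(z)                                   # relaxed codes in (-1, 1)
shadow = torch.sign(torch.randn(8, 16))
sim = (torch.rand(8, 8) > 0.5).float()              # hypothetical pairwise labels
loss = shadow_hashing_loss(h, shadow, sim)
loss.backward()
shadow = update_shadow(shadow, h)
```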
