
Accelerated degradation tests are used to obtain accurate estimates of the lifetime properties of highly reliable products within a relatively short testing time. In these tests, data collected at high stress levels (e.g., temperature, voltage, or vibration) are extrapolated, through a physically meaningful model, to obtain estimates of lifetime quantiles under normal use conditions. In this work, we consider repeated-measures accelerated degradation tests with multiple stress variables, where the degradation paths are assumed to follow a linear mixed effects model, which is common in settings where repeated measures are made. We derive optimal experimental designs that minimize the asymptotic variance for estimating the median failure time under normal use conditions, both when the measurement time points are fixed in advance and when they are also to be optimized.
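As a minimal illustration of the quantity being optimized, the sketch below simulates degradation paths from a single-stress linear mixed effects model, $y_i(t) = (\beta_0 + b_{0i}) + (\beta_1 + b_{1i})t$, and Monte Carlo estimates the median time at which a path crosses a failure threshold. All parameter values (fixed effects, random-effect variances, threshold) are illustrative assumptions, not values from the paper:

```python
import random, statistics

random.seed(0)

# Hypothetical model: degradation path of unit i is
#   y_i(t) = (beta0 + b0_i) + (beta1 + b1_i) * t,
# with random effects b0_i, b1_i ~ N(0, sd^2). A unit "fails"
# when y_i(t) first crosses the threshold Df.
beta0, beta1 = 1.0, 0.5      # fixed effects (illustrative values)
sd0, sd1 = 0.1, 0.05         # random-effect standard deviations
Df = 6.0                     # failure threshold

def failure_time():
    b0 = random.gauss(0.0, sd0)
    b1 = random.gauss(0.0, sd1)
    # sd1 is small relative to beta1, so the slope stays positive
    return (Df - (beta0 + b0)) / (beta1 + b1)

times = [failure_time() for _ in range(20000)]
t_med = statistics.median(times)

# With symmetric random effects the median failure time is close to
# the plug-in value (Df - beta0) / beta1 = 10.
print(round(t_med, 2))
```

An optimal design would choose stress levels (and, in the second variant, measurement times) to minimize the asymptotic variance of exactly this kind of median estimate.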


Multiuser multiple-input multiple-output wireless communications systems have the potential to satisfy the performance requirements of fifth-generation and future wireless networks. In this context, cell-free (CF) systems, where the antennas are distributed over the area of interest, have attracted attention because of their potential to enhance the overall efficiency and throughput when compared to traditional cell-based networks. However, the performance of CF systems is degraded by imperfect channel state information (CSI). To mitigate the detrimental effects of imperfect CSI, we employ rate splitting (RS), a multiple-access scheme that divides each user's message into a common and a private portion so that interference is managed robustly. Unlike prior works, which leave the impact of RS in CF systems unexamined, we propose a CF architecture that employs RS with linear precoders to address degraded CSI. We derive closed-form expressions to compute the sum-rate performance of the proposed RS-CF architecture. Our numerical experiments show that our RS-CF system outperforms existing systems, obtaining up to $10$% higher sum-rate.
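The power split between the common and private streams can be made concrete with a deliberately crude two-user toy, in which a single scalar `eps` stands in for the cross-interference left by imperfect CSI (all numbers are illustrative assumptions; this is not the paper's closed-form analysis, and the symmetric setup exaggerates the value of the common stream):

```python
import math

# Toy symmetric 2-user setup: channel gain g, total power P, and a
# residual interference fraction eps between private streams caused by
# imperfect CSI (all values illustrative).
P, g, eps, noise = 10.0, 1.0, 0.5, 1.0

def sum_rate(alpha):
    """alpha = fraction of power on the common stream."""
    Pc, Pp = alpha * P, (1 - alpha) * P / 2   # private power split equally
    # Private streams suffer residual cross-interference (imperfect CSI).
    sinr_p = g * Pp / (noise + eps * g * Pp)
    # Common stream is decoded first, treating both private streams as
    # noise; its rate is shared, so it counts once in the sum rate.
    sinr_c = g * Pc / (noise + g * 2 * Pp)
    return math.log2(1 + sinr_c) + 2 * math.log2(1 + sinr_p)

# Grid search over the common/private power split.
best = max((sum_rate(a / 20), a / 20) for a in range(21))
print(round(sum_rate(0.0), 3), round(best[0], 3), best[1])
```

With poor CSI (large `eps`), the grid search favors moving power onto the common stream, which is the qualitative mechanism by which RS hardens CF systems against CSI errors.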

We consider a fully digital massive multiple-input multiple-output architecture with low-resolution analog-to-digital/digital-to-analog converters (ADCs/DACs) at the base station (BS) and analyze the performance trade-off between the number of BS antennas, the resolution of the ADCs/DACs, and the bandwidth. Assuming a hardware power consumption constraint, we determine the relationship between these design parameters by using a realistic model for the power consumption of the ADCs/DACs and the radio frequency chains. Considering uplink pilot-aided channel estimation, we build on the Bussgang decomposition to derive tractable expressions for uplink and downlink ergodic achievable sum rates. Numerical results show that the ergodic performance is boosted when many BS antennas with very low resolution (i.e., 2 to 3 bits) are adopted in both the uplink and the downlink.
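The Bussgang decomposition underlying such rate analyses writes the quantizer output as a linear gain times the Gaussian input plus a distortion uncorrelated with the input: $q(x) = Gx + d$ with $G = \mathrm{E}[x\,q(x)]/\mathrm{E}[x^2]$. For 1-bit quantization of $x \sim \mathcal{N}(0, \sigma^2)$ this gives $G = \sqrt{2/\pi}/\sigma$, which the sketch below verifies numerically (a toy scalar check, not the paper's multi-bit ADC/DAC analysis):

```python
import math, random

random.seed(1)

# Bussgang decomposition: for a nonlinearity q(.) driven by zero-mean
# Gaussian x, q(x) = G*x + d with E[x*d] = 0, where
# G = E[x*q(x)] / E[x^2]. For the 1-bit quantizer q(x) = sign(x) and
# x ~ N(0, sigma^2), the gain is G = sqrt(2/pi)/sigma.
sigma = 2.0
n = 200000
x = [random.gauss(0.0, sigma) for _ in range(n)]
q = [1.0 if v >= 0 else -1.0 for v in x]

G_hat = sum(a * b for a, b in zip(x, q)) / sum(a * a for a in x)
G_theory = math.sqrt(2 / math.pi) / sigma

# The distortion d = q(x) - G*x is uncorrelated with x by construction.
d = [b - G_hat * a for a, b in zip(x, q)]
corr = sum(a * b for a, b in zip(x, d)) / n
print(round(G_hat, 3), round(G_theory, 3))
```

Treating the distortion term as additional noise is what makes the ergodic sum-rate expressions tractable at low resolution.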

Agile quadrotor flight in challenging environments has the potential to revolutionize shipping, transportation, and search and rescue applications. Nonlinear model predictive control (NMPC) has recently shown promising results for agile quadrotor control, but relies on highly accurate models for maximum performance. Hence, model uncertainties in the form of unmodeled complex aerodynamic effects, varying payloads, and parameter mismatch degrade overall system performance. In this paper, we propose L1-NMPC, a novel hybrid adaptive NMPC that learns model uncertainties online and immediately compensates for them, drastically improving performance over the non-adaptive baseline with minimal computational overhead. Our proposed architecture generalizes to many different environments; we evaluate it under wind, unknown payloads, and highly agile flight conditions. The proposed method demonstrates considerable flexibility and robustness, with more than 90% tracking error reduction over non-adaptive NMPC under large unknown disturbances and without any gain tuning. In addition, the same controller with identical gains can accurately fly highly agile racing trajectories exhibiting top speeds of 70 km/h, offering tracking performance improvements of around 50% relative to the non-adaptive NMPC baseline.
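The adaptation idea can be reduced to a scalar caricature: for a plant $\dot{x} = u + d$ with unknown disturbance $d$, a pure feedback baseline leaves a steady-state offset $d/k$, while estimating $d$ online from the measured state derivative and subtracting the estimate removes the offset. This is a drastic simplification of the paper's L1-NMPC (no MPC, no filter bandwidth design), for intuition only:

```python
# Scalar sketch of the adaptation idea: plant x' = u + d with an
# unknown constant disturbance d. The baseline u = -k*x leaves a
# steady-state offset d/k; estimating d from the measured state
# derivative and subtracting it drives the offset to zero.
dt, k, lam, d_true = 0.001, 2.0, 50.0, 0.5

def simulate(adaptive):
    x, d_hat, u = 1.0, 0.0, 0.0
    for _ in range(20000):                      # 20 s of simulated time
        x_new = x + dt * (u + d_true)           # plant step (Euler)
        if adaptive:
            xdot_meas = (x_new - x) / dt        # measured derivative
            d_hat += lam * dt * (xdot_meas - u - d_hat)  # estimator
        x = x_new
        u = -k * x - (d_hat if adaptive else 0.0)
    return x

x_base, x_adapt = simulate(False), simulate(True)
print(round(x_base, 3), round(x_adapt, 4))  # offset d/k = 0.25 vs ~0
```

The non-adaptive run settles at the offset $d/k = 0.25$, while the adaptive run regulates to zero without retuning $k$, mirroring the "identical gains, large disturbance" result claimed above.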

Repeatedly solving the parameterized optimal mass transport (pOMT) problem is a frequent task in applications such as image registration and adaptive grid generation. It is thus critical to develop a highly efficient reduced solver that is as accurate as the full order model. In this paper, we propose such a machine-learning-like method for pOMT by adapting a new reduced basis (RB) technique specifically designed for nonlinear equations, the reduced residual reduced over-collocation (R2-ROC) approach, to the parameterized Monge-Ampère equation. It builds on top of a narrow-stencil finite difference method (FDM), a so-called truth solver, which we propose in this paper for the Monge-Ampère equation with a transport boundary condition. Together with the R2-ROC approach, it allows us to handle the strong and unique nonlinearity of the Monge-Ampère equation, achieving online efficiency without resorting to any direct approximation of the nonlinearity. Several challenging numerical tests demonstrate the accuracy and high efficiency of our method for solving the Monge-Ampère equation with various parametric boundary conditions.
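In one dimension the Monge-Ampère equation $\det D^2 u = f$ degenerates to $u'' = f$, so the flavor of a finite-difference "truth solver" can be sketched with a single tridiagonal solve. This toy is linear and one-dimensional; it stands in for, but is much simpler than, the paper's 2D narrow-stencil scheme with transport boundary conditions:

```python
# 1D illustration: det(D^2 u) = f reduces to u'' = f(x), discretized as
# (u_{j-1} - 2u_j + u_{j+1}) / h^2 = f_j and solved with the Thomas
# algorithm. Manufactured problem: u'' = 2, u(0)=0, u(1)=1 -> u = x^2.
n = 100                      # interior grid points
h = 1.0 / (n + 1)
f = [2.0] * n

# System: -u_{j-1} + 2u_j - u_{j+1} = -h^2 f_j, boundary values moved
# to the right-hand side.
a, b, c = -1.0, 2.0, -1.0
rhs = [-h * h * fj for fj in f]
rhs[0] += 0.0                # left boundary u(0) = 0
rhs[-1] += 1.0               # right boundary u(1) = 1

# Thomas algorithm: forward elimination, then back substitution.
cp, dp = [0.0] * n, [0.0] * n
cp[0], dp[0] = c / b, rhs[0] / b
for j in range(1, n):
    denom = b - a * cp[j - 1]
    cp[j] = c / denom
    dp[j] = (rhs[j] - a * dp[j - 1]) / denom
u = [0.0] * n
u[-1] = dp[-1]
for j in range(n - 2, -1, -1):
    u[j] = dp[j] - cp[j] * u[j + 1]

# The centered second difference is exact on quadratics, so the
# discrete solution matches u(x) = x^2 to rounding error.
err = max(abs(u[j] - ((j + 1) * h) ** 2) for j in range(n))
print(err)
```

In two dimensions the determinant makes the discrete system genuinely nonlinear, which is precisely why the R2-ROC reduction of the nonlinearity matters for online efficiency.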

Exact null distributions of goodness-of-fit test statistics are generally challenging to obtain in tractable forms. Practitioners are therefore usually obliged to rely on asymptotic null distributions or Monte Carlo methods, either in the form of a lookup table or carried out on demand, to apply a goodness-of-fit test. Stephens (1970) provided remarkably simple and useful transformations of several classic goodness-of-fit test statistics that stabilized their exact-$n$ critical values for varying sample sizes $n$. However, details on the accuracy of these and subsequent transformations in yielding exact $p$-values, and even a clear understanding of how several of the transformations were derived, remain scarce. We illuminate and automate, using modern tools, this stabilization approach to (i) expand its scope of applicability and (ii) yield semi-continuous exact $p$-values, as opposed to exact critical values for fixed significance levels. We show improvements in the stabilization accuracy of the exact null distributions of the Kolmogorov-Smirnov, Cram\'er-von Mises, Anderson-Darling, Kuiper, and Watson test statistics. In addition, we provide a parameter-dependent exact-$n$ stabilization for several novel statistics for testing uniformity on the hypersphere of arbitrary dimension. A data application in astronomy illustrates the benefits of the advocated stabilization for quickly analyzing small-to-moderate sequentially measured samples.
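As a concrete example of the kind of stabilization being refined, Stephens' (1970) classical modification for the one-sample Kolmogorov-Smirnov statistic multiplies $D_n$ by $\sqrt{n} + 0.12 + 0.11/\sqrt{n}$, so that a single set of critical values (about 1.358 at the 5% level) works across sample sizes. A minimal sketch against the Uniform(0,1) null:

```python
import math

def ks_statistic(sample):
    """One-sample Kolmogorov-Smirnov statistic vs. the Uniform(0,1) null."""
    x = sorted(sample)
    n = len(x)
    d_plus = max((i + 1) / n - v for i, v in enumerate(x))
    d_minus = max(v - i / n for i, v in enumerate(x))
    return max(d_plus, d_minus)

def stephens_modified_ks(sample):
    # Stephens (1970): D_n * (sqrt(n) + 0.12 + 0.11/sqrt(n)) has nearly
    # n-independent critical values (~1.358 at the 5% level).
    n = len(sample)
    return ks_statistic(sample) * (math.sqrt(n) + 0.12 + 0.11 / math.sqrt(n))

# A small sample that is plausibly uniform: the modified statistic
# stays well below the stabilized 5% critical value.
sample = [0.05, 0.15, 0.27, 0.41, 0.49, 0.6, 0.72, 0.81, 0.9, 0.97]
print(round(stephens_modified_ks(sample), 3))
```

The paper's contribution is to automate and sharpen exactly this style of transformation, returning stabilized $p$-values rather than critical values at a handful of fixed levels.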

The time-dependent quadratic minimization (TDQM) problem appears in many applications and research projects. It has been reported that zeroing neural network (ZNN) models can effectively solve the TDQM problem. However, the convergence and robustness of existing ZNN models are limited by the lack of a joint mechanism combining an adaptive coefficient with an integration-enhanced term. Consequently, a residual-based adaptive coefficient zeroing neural network (RACZNN) model with an integration term is proposed in this paper for solving the TDQM problem. The adaptive coefficient improves convergence, and the integration term ensures that the RACZNN model maintains reliable robustness under time-varying measurement noises. Compared with state-of-the-art models, the proposed RACZNN model achieves faster convergence and more reliable robustness. Theorems are then provided to prove the convergence of the RACZNN model. Finally, quantitative numerical experiments are designed and performed to verify the performance of the proposed RACZNN model.
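The underlying ZNN construction is compact: for $\min_x \frac{1}{2}x^{\top}A(t)x + b(t)^{\top}x$, define the residual $e(t) = A(t)x(t) + b(t)$ of the time-varying optimality condition and impose $\dot{e} = -\gamma e$, which yields $\dot{x} = A^{-1}(-\dot{A}x - \dot{b} - \gamma e)$. The toy below uses a fixed coefficient $\gamma$ and a diagonal $A(t)$ for simplicity (the proposed RACZNN instead adapts the coefficient from the residual and adds an integral term):

```python
import math

# Fixed-coefficient ZNN for min_x 0.5 x'A(t)x + b(t)'x with
# A(t) = diag(2+sin t, 2+cos t), b(t) = (sin t, cos t).
# Residual e = A x + b; imposing e' = -gamma*e gives
#   x' = A^{-1}(-A' x - b' - gamma*e).
gamma, dt = 10.0, 1e-4
x = [0.0, 0.0]
t = 0.0
for _ in range(50000):                            # integrate to t = 5
    A  = [2 + math.sin(t), 2 + math.cos(t)]       # diagonal entries
    Ad = [math.cos(t), -math.sin(t)]              # dA/dt
    b  = [math.sin(t), math.cos(t)]
    bd = [math.cos(t), -math.sin(t)]              # db/dt
    for i in range(2):
        e = A[i] * x[i] + b[i]
        x[i] += dt * (-Ad[i] * x[i] - bd[i] - gamma * e) / A[i]
    t += dt

# Residual of the time-varying optimality condition at the final time
# should be driven near zero despite A(t), b(t) drifting continuously.
res = max(abs((2 + math.sin(t)) * x[0] + math.sin(t)),
          abs((2 + math.cos(t)) * x[1] + math.cos(t)))
print(res)
```

Replacing the fixed $\gamma$ with a residual-driven coefficient and adding an integral of $e$ is what gives the RACZNN its faster convergence and noise robustness.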

A mega-constellation of low Earth orbit (LEO) satellites (SATs) is envisaged to provide a global-coverage SAT network in beyond-fifth-generation (5G) cellular systems. LEO SAT networks exhibit extremely long link distances to many users under a time-varying network topology. This makes existing multiple access protocols, such as the random access channel (RACH) based cellular protocol designed for a fixed terrestrial topology, ill-suited. To overcome this issue, in this paper we propose a novel grant-free random access solution for LEO SAT networks, dubbed emergent random access channel protocol (eRACH). In stark contrast to existing model-based and standardized protocols, eRACH is a model-free approach that emerges through interaction with the non-stationary network environment, using multi-agent deep reinforcement learning (MADRL). Furthermore, by exploiting known SAT orbiting patterns, eRACH requires no central coordination or additional communication across users, while training convergence is stabilized through the regular orbiting patterns. Compared to RACH, we show from various simulations that our proposed eRACH yields 54.6% higher average network throughput with roughly half the average access delay, while achieving a Jain's fairness index of 0.989.
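The fairness figure quoted above is Jain's index, $J = (\sum_i x_i)^2 / (n \sum_i x_i^2)$, which equals 1 when all users receive equal throughput and $1/n$ in the most unfair case. A small helper makes the metric concrete:

```python
def jains_index(throughputs):
    """Jain's fairness index: 1 for equal allocation, 1/n at worst."""
    n = len(throughputs)
    s = sum(throughputs)
    return s * s / (n * sum(v * v for v in throughputs))

print(jains_index([1.0, 1.0, 1.0, 1.0]))            # perfectly fair
print(round(jains_index([4.0, 1.0, 1.0, 1.0]), 3))  # one user dominates
```

An index of 0.989 therefore indicates near-perfectly even access across users despite the decentralized, learned protocol.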

We consider a randomized controlled trial between two groups. The objective is to identify a population, called a subgroup, whose characteristics make the test therapy more effective than the control therapy. This identification can be made by estimating the treatment effect and identifying interactions between treatments and covariates. To date, many methods have been proposed to identify subgroups for a single outcome. Methods for multiple outcomes also exist, but they are difficult to interpret and cannot be applied to outcomes other than continuous values. In this paper, we propose a multivariate regression method that introduces latent variables to estimate the treatment effect on multiple outcomes simultaneously. The proposed method adds Lasso sparsity constraints to the estimated loadings to facilitate the interpretation of the relationship between outcomes and covariates. The framework of the generalized linear model makes it applicable to various types of outcomes. Subgroups are interpreted by visualizing the treatment effects and latent variables. This allows us to identify subgroups with characteristics that make the test therapy more effective for multiple outcomes. Simulation and real data examples demonstrate the effectiveness of the proposed method.
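The Lasso constraint on the loadings is what produces interpretable sparsity: in coordinate-wise Lasso updates, each loading passes through the soft-thresholding operator $S(z, \lambda) = \mathrm{sign}(z)\max(|z| - \lambda, 0)$, which sets small entries exactly to zero. A minimal sketch of the operator applied to a hypothetical loading vector (illustrative only, not the paper's full estimation procedure):

```python
def soft_threshold(z, lam):
    """Proximal operator of the L1 penalty used in Lasso updates."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Applied entrywise to a dense loading vector, it zeroes the small
# entries, leaving a sparse pattern that ties each latent variable to
# only a few outcomes/covariates.
loadings = [0.8, -0.05, 0.02, -0.6, 0.1]
sparse = [soft_threshold(v, 0.1) for v in loadings]
print(sparse)
```

The exact zeros are what make the visualized loading matrix readable when interpreting subgroups.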

Deep neural network models used for medical image segmentation are large because they are trained with high-resolution three-dimensional (3D) images. Graphics processing units (GPUs) are widely used to accelerate training. However, the memory on a GPU is often not large enough to train such models. A popular approach to tackling this problem is the patch-based method, which divides a large image into small patches and trains the model on them. However, this method degrades segmentation quality if a target object spans multiple patches. In this paper, we propose a novel approach for 3D medical image segmentation that utilizes data swapping, which moves intermediate data from GPU memory to CPU memory to enlarge the effective GPU memory size, enabling training on high-resolution 3D medical images without patching. We carefully tuned the data-swapping parameters to obtain the best training performance for 3D U-Net, a widely used deep neural network model for medical image segmentation. We applied our tuning to train 3D U-Net with full-size images of 192 x 192 x 192 voxels in a brain tumor dataset. As a result, communication overhead, the most important issue, was reduced by 17.1%. Compared with the patch-based method using patches of 128 x 128 x 128 voxels, our training with full-size images improved the mean Dice score by 4.48% and 5.32% for detecting the whole tumor sub-region and tumor core sub-region, respectively. The total training time was reduced from 164 hours to 47 hours, a 3.53x speedup.
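To see why patching can split a target object, consider covering each 192-voxel axis with 128-voxel patches: two overlapping patches per axis (eight in total) are needed, and any object crossing the overlap region is seen only partially by each patch. A small sketch of the tiling (a hypothetical helper, not the authors' code):

```python
def patch_starts(volume, patch):
    """Start offsets of (possibly overlapping) patches covering one axis."""
    if patch >= volume:
        return [0]
    starts = list(range(0, volume - patch, patch))
    starts.append(volume - patch)  # final patch flush with the border
    return starts

vol, p = 192, 128
axis = patch_starts(vol, p)        # two overlapping patches per axis
n_patches = len(axis) ** 3         # 2^3 = 8 patches for the volume
print(axis, n_patches)
```

A tumor straddling voxel 100 on any axis falls across the overlap of the two patches, which is exactly the failure mode that training on the full 192^3 volume via data swapping avoids.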

We consider the task of learning the parameters of a {\em single} component of a mixture model, for the case when we are given {\em side information} about that component; we call this the "search problem" in mixture models. We would like to solve this with computational and sample complexity lower than those of solving the full problem, where one learns the parameters of all components. Our main contributions are the development of a simple but general model for the notion of side information, and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each of these we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and lower computational complexity than existing moment-based mixture model algorithms (e.g., tensor methods). We also illustrate several natural ways one can obtain such side information, for specific problem instances. Our experiments on real data sets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvements in runtime and accuracy.
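A toy version of the search problem conveys the intuition: in a two-component 1D Gaussian mixture, side information given as a noisy soft label for the target component lets a simple weighted mean home in on that component's parameters, with no attempt to learn the other component. This is an illustrative caricature, not the paper's matrix-based algorithm, and the naive weighting is biased toward the mixture mean by the label noise:

```python
import random

random.seed(3)

# Two-component 1D Gaussian mixture; we only care about the component
# centred at mu1. Side information s_i is a 90%-accurate soft label for
# membership in that component.
mu0, mu1 = -2.0, 2.0
data, side = [], []
for _ in range(20000):
    z = random.random() < 0.5            # true (hidden) component label
    data.append(random.gauss(mu1 if z else mu0, 1.0))
    side.append(0.9 if z else 0.1)       # noisy soft label (side info)

naive = sum(data) / len(data)            # pooled mean: ignores side info
weighted = sum(s * x for s, x in zip(side, data)) / sum(side)
print(round(naive, 2), round(weighted, 2))
```

The pooled mean lands near 0 (the mixture midpoint), while the side-information-weighted mean is pulled strongly toward $\mu_1 = 2$; informative side information is doing all the work, matching the "if (and only if) informative" characterization above.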
