
We consider a formal statistical design that allows simultaneous enrollment of a main cohort and a backfill cohort of patients in a dose-finding trial. The goal is to accumulate more information at various doses to facilitate dose optimization. The proposed design, called Bi3+3, combines the simple dose-escalation algorithm of the i3+3 design with model-based inference under the framework of probability of decisions (POD), both previously published. As a result, Bi3+3 provides a simple algorithm for backfilling patients to lower doses in a dose-finding trial once those doses have exhibited an acceptable safety profile in patients. The POD framework allows dosing decisions to be made while some backfill patients are still being followed with incomplete toxicity outcomes, thereby potentially expediting the clinical trial. At the end of the trial, Bi3+3 uses both toxicity and efficacy outcomes to estimate an optimal biological dose (OBD). The proposed inference is based on a dose-response model that accommodates either a monotone or a plateau dose-efficacy relationship, both of which are frequently encountered in modern oncology drug development. Simulation studies show promising operating characteristics of the Bi3+3 design in comparison to existing designs.
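
As a rough illustration of the escalation logic Bi3+3 inherits, here is a minimal Python sketch of the published i3+3 rule; the backfill cohort and the POD machinery are not shown, and the equivalence-interval bounds and example counts are illustrative assumptions:

```python
def i3plus3_decision(n_dlt: int, n_patients: int,
                     lo: float = 0.25, hi: float = 0.35) -> str:
    """i3+3 dose-escalation rule for the current dose.

    [lo, hi] is the equivalence interval around the target toxicity rate;
    n_dlt / n_patients is the observed dose-limiting-toxicity (DLT) rate.
    """
    rate = n_dlt / n_patients
    if rate < lo:
        return "escalate"      # observed toxicity below the interval
    if rate <= hi:
        return "stay"          # observed toxicity inside the interval
    # Rate above the interval: if removing one DLT would fall below it,
    # the data are still consistent with the target, so stay.
    if (n_dlt - 1) / n_patients < lo:
        return "stay"
    return "de-escalate"

# Example: 1 DLT in 3 patients with target interval [0.25, 0.35]
print(i3plus3_decision(1, 3))  # rate = 0.33 -> "stay"
```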


In oncology, phase II studies are crucial to clinical development plans, as they identify potent agents with sufficient activity to continue development in subsequent phase III trials. Traditionally, phase II studies are single-arm studies with a short-term treatment-efficacy endpoint. However, drug safety is also an important consideration. Thus, in the context of such multiple-outcome designs, Bayesian monitoring strategies based on predictive probabilities have been developed to assess whether a clinical trial will show a conclusive result at the planned end of the study. In this paper, we propose a new, simple index vector for summarizing results that cannot be captured by existing strategies. Specifically, at each interim monitoring time point, we calculate the Bayesian predictive probability using our new index and use it to assign a go/no-go decision. Finally, simulation studies are performed to evaluate the operating characteristics of the design. This analysis demonstrates that the proposed method makes appropriate interim go/no-go decisions.
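
For context, the sketch below shows a standard Beta-Binomial predictive-probability calculation of the kind such monitoring strategies build on; it is not the new index vector proposed here, and the prior, thresholds, and sample sizes are illustrative:

```python
from scipy import stats

def predictive_probability(x, n, N, p0=0.2, theta_T=0.9, a=1.0, b=1.0):
    """Bayesian predictive probability of trial success at the final analysis.

    x responses observed in n enrolled patients, N planned patients in total,
    Beta(a, b) prior on the response rate p.  The trial 'succeeds' if, at the
    end, P(p > p0 | data) > theta_T.
    """
    m = N - n  # patients yet to be observed
    pp = 0.0
    for y in range(m + 1):
        # Beta-Binomial predictive probability of y future responses
        w = stats.betabinom.pmf(y, m, a + x, b + n - x)
        # posterior P(p > p0) given all N patients' outcomes
        post = 1.0 - stats.beta.cdf(p0, a + x + y, b + N - x - y)
        pp += w * (post > theta_T)
    return pp

# Example interim look: 7 responses in 20 patients, 40 planned
pp = predictive_probability(7, 20, 40)
print("go" if pp > 0.1 else "no-go", round(pp, 3))
```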

The Internet of Things (IoT) has boomed in recent years, with an ever-growing number of connected devices and a corresponding exponential increase in network traffic. As a result, IoT devices have become potential witnesses of the surrounding environment and the people living in it, creating a vast new source of forensic evidence. To exploit this new source of evidence, a field called IoT Forensics has emerged. In this paper, we present CSI Sniffer, a tool that integrates the collection and management of Channel State Information (CSI) in Wi-Fi Access Points. CSI is a physical-layer indicator that enables human sensing, including occupancy monitoring and activity recognition. After describing the tool's architecture and implementation, we demonstrate its capabilities through two application scenarios that use binary classification techniques to classify user behavior based on CSI features extracted from IoT traffic. Our results show that the proposed tool can enhance forensic investigations by providing an additional source of evidence. Wi-Fi Access Points integrated with CSI Sniffer can be used by ISPs or network managers to facilitate the collection of information from IoT devices and the surrounding environment. We conclude by analyzing the storage requirements of CSI sample collection and discussing the impact of lossy compression techniques on classification performance.
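
As a hedged illustration of the kind of binary classification the application scenarios describe, the sketch below trains a classifier on synthetic stand-in CSI amplitude data; the feature choices and the occupancy labels are assumptions, not the tool's actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical CSI amplitude matrix: one row per captured frame,
# one column per OFDM subcarrier (64 here); random data stands in
# for real captures from an access point.
rng = np.random.default_rng(0)
csi_amplitude = rng.normal(size=(1000, 64))
labels = rng.integers(0, 2, size=1000)  # 0 = room empty, 1 = room occupied

# Features per frame: raw amplitudes plus their first difference
# across subcarriers (a crude proxy for frequency-selective fading)
features = np.hstack([csi_amplitude, np.diff(csi_amplitude, axis=1)])

X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```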

The discovery of individual objectives in the collective behavior of complex dynamical systems, such as fish schools and bacteria colonies, is a long-standing challenge. Inverse reinforcement learning is a potent approach to this challenge, but its applicability to dynamical systems involving continuous state-action spaces and multiple interacting agents has been limited. In this study, we address this gap by introducing an off-policy inverse multi-agent reinforcement learning algorithm (IMARL). Our approach combines ReF-ER techniques with guided cost learning. By leveraging demonstrations, our algorithm automatically uncovers the reward function and learns an effective policy for the agents. Through extensive experimentation, we demonstrate that the learned policy captures the behavior observed in the provided data and achieves promising results across problem domains, including single-agent models in the OpenAI Gym and multi-agent models of schooling behavior. The present study shows that the proposed IMARL algorithm is a significant step towards understanding collective dynamics from the perspective of their constituents, and showcases its value as a tool for studying complex physical systems exhibiting collective behavior.
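
The guided-cost-learning component can be sketched as the usual maximum-entropy IRL objective (in the style of Finn et al.); the network shape is arbitrary, and the ReF-ER integration, which is this paper's actual contribution, is not reproduced here:

```python
import torch
import torch.nn as nn

class RewardNet(nn.Module):
    """Learned reward r_theta(s, a); the true objective is unknown."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, 1))
    def forward(self, sa):
        return self.net(sa).squeeze(-1)

def gcl_loss(reward_net, demo_sa, sample_sa, sample_logq):
    """Guided-cost-learning (max-entropy IRL) objective, sketched.

    demo_sa:     (N, dim) state-action pairs from demonstrations
    sample_sa:   (M, dim) pairs generated by the current policy
    sample_logq: (M,) log-probabilities of the sampled pairs under the
                 policy, used as importance weights
    """
    r_demo = reward_net(demo_sa)
    r_samp = reward_net(sample_sa)
    # log of the importance-weighted partition estimate: log E_q[exp(r)/q]
    m = torch.tensor(float(sample_sa.shape[0]))
    log_z = torch.logsumexp(r_samp - sample_logq, dim=0) - torch.log(m)
    # maximize demo reward relative to the partition function
    return -(r_demo.mean() - log_z)
```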

In the realm of urban transportation, metro systems serve as crucial and sustainable means of public transit. However, their substantial energy consumption poses a challenge to the goal of sustainability. Disturbances such as delays and passenger-flow changes can further exacerbate this issue by degrading energy efficiency. To tackle this problem, we propose a policy-based reinforcement learning approach that reschedules the metro timetable under disturbances, optimizing energy efficiency by adjusting the dwell time and cruise speed of trains. Our experiments in a simulation environment demonstrate the superiority of our method over baseline methods, achieving a reduction in traction energy consumption of up to 10.9% and an increase in regenerative-braking energy utilization of up to 47.9%. This study provides an effective solution to the energy-saving problem of urban rail transit.
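
A minimal policy-gradient (REINFORCE) loop in the spirit of the approach might look as follows; the state variables, toy dynamics, and energy terms are all stand-ins for the paper's simulator, not its actual environment:

```python
import torch
import torch.nn as nn

# Hypothetical state: [delay, headway, passenger load]; the two actions
# adjust dwell time and cruise speed.
policy = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 2))
log_std = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam(list(policy.parameters()) + [log_std], lr=1e-3)

def step_env(state, action):
    # Toy dynamics: the reward penalizes a fabricated traction-energy
    # term; a real simulator would also credit regenerative braking.
    energy = (action ** 2).sum()
    return state + 0.1 * torch.randn(3), -energy

for episode in range(200):
    state, log_probs, rewards = torch.zeros(3), [], []
    for t in range(20):
        mean = policy(state)
        dist = torch.distributions.Normal(mean, log_std.exp())
        action = dist.sample()
        log_probs.append(dist.log_prob(action).sum())
        state, r = step_env(state, action)
        rewards.append(r)
    ret = torch.stack(rewards).sum()                      # episode return
    loss = -ret.detach() * torch.stack(log_probs).sum()   # REINFORCE
    opt.zero_grad()
    loss.backward()
    opt.step()
```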

Machine learning has recently made significant strides in reducing design cycle time for complex products. Ship design, which currently involves years-long cycles and small-batch production, could greatly benefit from these advancements. A machine learning tool that learns from the designs of many different types of ships could identify and optimize tradeoffs in ship design. However, the lack of publicly available ship design datasets currently limits the potential for leveraging machine learning in generalized ship design. To address this gap, this paper presents a large dataset of thirty thousand ship hulls, each with design and functional performance information, including parameterization, mesh, point cloud, and image representations, as well as thirty-two hydrodynamic drag measures under different operating conditions. The dataset is structured to allow human input and is also designed for computational methods. Additionally, the paper introduces a set of twelve ship hulls from publicly available CAD repositories to showcase the proposed parameterization's ability to accurately reconstruct existing hulls. A surrogate model was developed to predict the thirty-two wave drag coefficients and was then used in a genetic algorithm case study to reduce the total drag of a hull by sixty percent while maintaining the shape of the hull's cross section and the length of the parallel midbody. Our work provides a comprehensive dataset and application examples for other researchers to use in advancing data-driven ship design.
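
The surrogate-plus-genetic-algorithm case study could be sketched as below; the surrogate objective, the constraints, and the GA settings are placeholders rather than the trained model and hull parameterization from the dataset:

```python
import numpy as np

rng = np.random.default_rng(1)
N_PARAMS = 12  # stand-in for the hull parameterization dimension

def surrogate_drag(params):
    """Placeholder for the trained wave-drag surrogate model."""
    return np.sum(params ** 2, axis=-1)  # fabricated smooth objective

def feasible(params):
    """Placeholder constraints, e.g., fixed cross section and midbody length."""
    return np.all(np.abs(params) < 2.0, axis=-1)

pop = rng.normal(size=(100, N_PARAMS))
for gen in range(50):
    drag = surrogate_drag(pop)
    drag[~feasible(pop)] = np.inf              # reject infeasible designs
    elite = pop[np.argsort(drag)[:20]]         # keep the 20 lowest-drag hulls
    children = elite[rng.integers(0, 20, 80)] \
        + 0.1 * rng.normal(size=(80, N_PARAMS))
    pop = np.vstack([elite, children])         # elitism + Gaussian mutation
print("best predicted drag:", surrogate_drag(pop).min())
```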

In this paper, we present a machine learning-based data generator framework tailored to aid researchers who use simulations to examine various physical systems or processes. High computational costs and the resulting limited data often pose significant challenges to gaining insight into these systems or processes. Our approach involves a two-step process: first, we train a supervised predictive model on a limited simulated dataset to predict simulation outcomes; then, a reinforcement learning agent is trained to generate accurate, simulation-like data by leveraging the supervised model. With this framework, researchers can generate more accurate data and predict outcomes without running computationally expensive simulations, enabling them to explore the parameter space more efficiently and gain deeper insight into physical systems or processes. We demonstrate the effectiveness of the proposed framework by applying it to two case studies, one focusing on earthquake rupture physics and the other on new material development.
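
A compact sketch of the two-step idea follows, with a cross-entropy-method search standing in for the reinforcement learning agent; the abstract does not specify the agent's reward design, so the target-matching reward and the synthetic "simulation" data here are assumptions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)

# Step 1: supervised surrogate trained on a small simulated dataset
# (random data stands in for expensive simulation runs).
X_sim = rng.uniform(-1, 1, size=(200, 4))
y_sim = np.sin(X_sim).sum(axis=1) + 0.05 * rng.normal(size=200)
surrogate = GradientBoostingRegressor().fit(X_sim, y_sim)

# Step 2: a simple search agent (cross-entropy method, standing in for
# the paper's RL agent) finds inputs whose predicted outcome matches a
# target, yielding new simulation-like (x, y) pairs without new runs.
target = 1.5
mu, sigma = np.zeros(4), np.ones(4)
for it in range(30):
    cand = mu + sigma * rng.normal(size=(256, 4))
    reward = -np.abs(surrogate.predict(cand) - target)
    elite = cand[np.argsort(reward)[-32:]]     # keep best candidates
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3
print("generated sample:", mu, surrogate.predict(mu[None])[0])
```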

We present a practical guide for the analysis of regression discontinuity (RD) designs in biomedical contexts. We begin by introducing key concepts, assumptions, and estimands within both the continuity-based framework and the local randomization framework. We then discuss modern estimation and inference methods within both frameworks, including approaches for bandwidth or local neighborhood selection, optimal treatment effect point estimation, and robust bias-corrected inference methods for uncertainty quantification. We also review empirical falsification tests that can be used to support key assumptions. Our discussion focuses on two features that are particularly relevant in biomedical research: (i) fuzzy RD designs, which often arise when therapeutic treatments are based on clinical guidelines but patients with scores near the cutoff are treated contrary to the assignment rule; and (ii) RD designs with discrete scores, which are ubiquitous in biomedical applications. We illustrate our discussion with three empirical applications: the effect of CD4 guidelines for anti-retroviral therapy on retention of HIV patients in South Africa, the effect of genetic guidelines for chemotherapy on breast cancer recurrence in the United States, and the effects of age-based patient cost-sharing on healthcare utilization in Taiwan. We provide replication materials employing publicly available statistical software in Python, R and Stata, offering researchers all necessary tools to conduct an RD analysis.
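
For intuition, a bare-bones sharp-RD point estimate via local linear regression is sketched below; the replication materials use dedicated packages with data-driven bandwidths and robust bias-corrected inference, which this toy version omits:

```python
import numpy as np
import statsmodels.api as sm

def rd_local_linear(y, x, cutoff=0.0, h=1.0):
    """Sharp-RD point estimate via local linear regression on each side.

    Uses a uniform kernel with bandwidth h for simplicity; in practice one
    would use a data-driven bandwidth and bias-corrected inference.
    """
    xc = x - cutoff
    w = np.abs(xc) <= h                          # observations inside bandwidth
    d = (xc >= 0).astype(float)                  # treatment indicator
    X = sm.add_constant(np.column_stack([d, xc, d * xc]))
    fit = sm.OLS(y[w], X[w]).fit()
    return fit.params[1]                         # jump in outcome at the cutoff

# Simulated example with a true effect of 0.5 at the cutoff
rng = np.random.default_rng(3)
x = rng.uniform(-2, 2, 2000)
y = 0.3 * x + 0.5 * (x >= 0) + rng.normal(scale=0.2, size=2000)
print("estimated RD effect:", round(rd_local_linear(y, x, h=0.8), 3))
```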

Cloud computing has become the de facto paradigm for delivering software to users, with organizations and enterprises of all sizes making use of cloud services in some way. On the surface, adopting the cloud appears to be a very efficient way to offload concerns such as infrastructure management, logistics, and, most importantly for this work, energy consumption and the consequent carbon emissions to the cloud service provider. In many ways, however, this is not an appropriately accountable approach to managing the ICT sector's contribution to global emissions. To this effect, we report on an exploratory case study conducted in collaboration with a Software as a Service provider operating globally in the telecommunications sector. The provider hosts its private cloud infrastructure in multi-tenant (that is, shared) off-premises data centers, and the study develops a fair model for allocating operational emissions among its service tenants -- customer companies with many distinct users. The emissions model must allocate the generated emissions appropriately both among the tenants of the software provider's services and among the different tenants of the same data center. A carbon footprint report generator built on the proposed model is used to present sustainability reports to the involved stakeholders for evaluation. Our results show that the model is perceived as transparent, informative, and fair, with requested improvements focusing mainly on the generated reports and the information they contain.
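
The abstract does not spell out the allocation keys, but a proportional-sharing rule of the following form conveys the flavor of such a model; the usage metric and tenant names are hypothetical:

```python
def allocate_emissions(total_kg_co2e, tenant_usage):
    """Allocate a data center's operational emissions across tenants in
    proportion to a usage metric (e.g., compute-hours or storage).

    This proportional rule is one plausible choice, not the study's
    actual allocation model.
    """
    total_usage = sum(tenant_usage.values())
    return {t: total_kg_co2e * u / total_usage for t, u in tenant_usage.items()}

# Hypothetical month: 1000 kg CO2e shared by three tenants
print(allocate_emissions(1000.0, {"tenant_a": 40, "tenant_b": 10, "tenant_c": 50}))
```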

While Reinforcement Learning (RL) has achieved tremendous success in sequential decision-making problems across many domains, it still faces key challenges of data inefficiency and a lack of interpretability. Recently, many researchers have leveraged insights from the causality literature, producing a flourishing body of work that brings the merits of causality to bear on these challenges. It is therefore timely and valuable to collate these Causal Reinforcement Learning (CRL) works, review CRL methods, and investigate the potential contributions of causality to RL. In particular, we divide existing CRL approaches into two categories according to whether their causality-based information is given in advance. We further analyze each category in terms of the formalization of different models, including the Markov Decision Process (MDP), Partially Observable Markov Decision Process (POMDP), Multi-Armed Bandit (MAB), and Dynamic Treatment Regime (DTR). Moreover, we summarize evaluation metrics and open-source resources, and discuss emerging applications along with promising prospects for the future development of CRL.

The evidential regression network (ENet) estimates a continuous target and its predictive uncertainty without costly Bayesian model averaging. However, the target may be predicted inaccurately because of the gradient shrinkage problem of the ENet's original loss function, the negative log marginal likelihood (NLL) loss. In this paper, the objective is to improve the prediction accuracy of the ENet while preserving its efficient uncertainty estimation by resolving the gradient shrinkage problem. To accomplish this, we propose a multi-task learning (MTL) framework referred to as MT-ENet. Within this framework, we define a Lipschitz-modified mean squared error (MSE) loss and add it to the existing NLL loss. The Lipschitz-modified MSE loss is designed to mitigate gradient conflict with the NLL loss by dynamically adjusting its Lipschitz constant, so that it does not disturb the uncertainty estimation of the NLL loss. MT-ENet enhances the predictive accuracy of the ENet without losing uncertainty estimation capability on synthetic datasets and real-world benchmarks, including drug-target affinity (DTA) regression. Furthermore, MT-ENet shows remarkable calibration and out-of-distribution detection capability on the DTA benchmarks.
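
For reference, the sketch below implements the Normal-Inverse-Gamma NLL loss of the underlying evidential network (following Amini et al., 2020), which is the loss whose gradient shrinkage MT-ENet addresses; the Lipschitz-modified MSE term itself is paper-specific and is not reproduced here:

```python
import math
import torch

def nig_nll(y, gamma, nu, alpha, beta):
    """Negative log marginal likelihood of a Normal-Inverse-Gamma
    evidential head.  gamma, nu, alpha, beta are the network outputs
    (gamma: predicted mean; nu, alpha, beta: evidence parameters).
    """
    omega = 2.0 * beta * (1.0 + nu)
    return (0.5 * torch.log(math.pi / nu)
            - alpha * torch.log(omega)
            + (alpha + 0.5) * torch.log(nu * (y - gamma) ** 2 + omega)
            + torch.lgamma(alpha) - torch.lgamma(alpha + 0.5))

# Toy evaluation with hand-picked parameter values
y = torch.tensor(1.0)
gamma, nu = torch.tensor(0.8), torch.tensor(1.0)
alpha, beta = torch.tensor(2.0), torch.tensor(1.0)
print(nig_nll(y, gamma, nu, alpha, beta))
```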
