
In this study, we introduce an optimization framework aimed at enhancing the efficiency of motion priority design in scenarios involving automated and teleoperated robots within an industrial recovery context. The escalating utilization of industrial robots at manufacturing sites has been instrumental in mitigating human workload. Nevertheless, achieving effective human-robot collaboration remains a challenge where human workers and robots share a workspace for collaborative tasks. When an industrial robot encounters a failure, the corresponding factory cell must be suspended for safe recovery. Given the limited capacity of pre-programmed robots to rectify such failures, human intervention becomes imperative, requiring a worker to enter the robot workspace to address the failure (e.g., a dropped object) while the robot system is halted. This interruption of the manufacturing process results in productivity loss. Robotic teleoperation has emerged as a promising technology that enables human workers to undertake high-risk tasks remotely and safely. Our study advocates incorporating robotic teleoperation into the recovery process during manufacturing failure scenarios, an approach we refer to as "Cooperative Tele-Recovery". We formulate priority rules designed to facilitate collision avoidance between manufacturing and recovery robots, ensuring a continuous manufacturing process with minimal production loss within a configurable risk limit. We present a comprehensive motion priority optimization framework, encompassing an HRC simulator-based priority optimization and a cooperative multi-robot controller, to identify optimal parameters for the priority function. The framework dynamically adjusts the allocation of motion priorities for manufacturing and recovery robots while adhering to predefined risk limitations.
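
The abstract does not specify the form of the priority function, so the following Python sketch only illustrates the underlying idea: motion priority is shifted smoothly from the manufacturing robot to the recovery robot as an estimated collision risk approaches a configurable limit. All function names, the logistic risk model, and the parameter values are hypothetical.

```python
import math

# Hypothetical priority rule: the manufacturing robot keeps priority while the
# estimated collision risk stays below a configurable limit; otherwise priority
# shifts to the recovery robot so it can clear the failure safely.
# The risk model and all parameters below are illustrative assumptions, not the
# authors' actual formulation.

def collision_risk(distance_m: float, rel_speed_mps: float, horizon_s: float = 1.0) -> float:
    """Toy risk estimate in [0, 1]: high when the robots are closing quickly."""
    closing = max(rel_speed_mps, 0.0) * horizon_s
    return 1.0 / (1.0 + math.exp(4.0 * (distance_m - closing)))

def motion_priority(distance_m, rel_speed_mps, risk_limit=0.2, w=5.0):
    """Return (manufacturing_priority, recovery_priority), summing to 1."""
    risk = collision_risk(distance_m, rel_speed_mps)
    # Smoothly hand priority to the recovery robot as risk approaches the limit.
    recovery = 1.0 / (1.0 + math.exp(-w * (risk - risk_limit)))
    return 1.0 - recovery, recovery

if __name__ == "__main__":
    for d in (2.0, 1.0, 0.5, 0.2):
        pm, pr = motion_priority(distance_m=d, rel_speed_mps=0.5)
        print(f"distance={d:.1f} m  manufacturing={pm:.2f}  recovery={pr:.2f}")
```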

Related Content

Automator is a piece of software developed by Apple for their Mac OS X system. Simply by pointing, clicking, and dragging with the mouse, you can combine a series of actions into a workflow, helping you complete complex tasks automatically (and repeatably). Automator also works across many different kinds of applications, including the Finder, the Safari web browser, iCal, Address Book, and others. It can also work with third-party applications such as Microsoft Office, Adobe Photoshop, or Pixelmator.

In this work, we propose a novel inferential procedure assisted by machine learning-based adjustment for randomized controlled trials. The method is developed under Rosenbaum's framework of exact tests in randomized experiments with covariate adjustment. Through extensive simulation experiments, we show that the proposed method robustly controls the type I error and boosts inference efficiency for a randomized controlled trial (RCT). This advantage is further demonstrated in a real-world example. The simplicity and robustness of the proposed method make it a competitive candidate as a routine inference procedure for RCTs, especially when the number of baseline covariates is large or when nonlinear associations or interactions among covariates are expected. Its application may remarkably reduce the required sample size and cost of RCTs, such as phase III clinical trials.
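
To make the procedure concrete, here is a minimal sketch of a Rosenbaum-style randomization test with machine-learning covariate adjustment: an outcome model is fit on covariates only, and the treatment effect is tested on the residuals against a permutation (re-randomization) reference distribution. The random-forest adjuster and the simulated data are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Simulated RCT: nonlinear covariate effects plus a modest treatment effect.
n, p = 200, 10
X = rng.normal(size=(n, p))
T = rng.binomial(1, 0.5, size=n)          # randomized assignment
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + 0.5 * T + rng.normal(scale=1.0, size=n)

# Covariate adjustment: fit the outcome on covariates only (ignoring T), then
# test the residuals. Because the fit never uses T, re-randomizing T yields an
# exact reference distribution under the sharp null of no treatment effect.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
resid = y - model.predict(X)

def stat(t):
    """Difference in mean residuals between the two arms."""
    return resid[t == 1].mean() - resid[t == 0].mean()

obs = stat(T)
perm = np.array([stat(rng.permutation(T)) for _ in range(2000)])
p_value = (np.sum(np.abs(perm) >= abs(obs)) + 1) / (len(perm) + 1)
print(f"observed stat = {obs:.3f}, permutation p-value = {p_value:.4f}")
```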

In this paper, we address the intricate challenge of gaze vector prediction, a pivotal task with applications ranging from human-computer interaction to driver monitoring systems. Our approach is designed for the demanding setting of extremely low-light conditions, leveraging a novel temporal event encoding scheme and a dedicated neural network architecture. The temporal encoding method seamlessly integrates Dynamic Vision Sensor (DVS) events with grayscale guide frames, generating consecutively encoded images as input to our neural network. Alongside this solution, we introduce a curated dataset tailored for low-light conditions that captures diverse gaze responses from participants within the active age group. The encoded temporal frames, paired with our network, demonstrate accurate spatial localization and reliable gaze direction in their predictions. Achieving a 100-pixel accuracy of 100%, our research underscores the ability of our neural network to exploit temporally consecutive encoded images for precise gaze vector prediction in challenging low-light videos, contributing to the advancement of gaze prediction technologies.
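
The abstract does not detail the encoding scheme, so the sketch below shows one common baseline consistent with the description: DVS events in a time window are accumulated into polarity channels and stacked with the grayscale guide frame to form one encoded image, and consecutive windows yield temporally consecutive inputs. The sensor resolution and all names are assumptions.

```python
import numpy as np

# Hypothetical sketch of fusing DVS events with a grayscale guide frame into a
# 3-channel encoded image. Polarity-split accumulation is a common baseline,
# not necessarily the paper's actual encoding.

H, W = 260, 346  # typical DVS resolution (e.g., DAVIS346); an assumption

def encode_window(events, guide_frame):
    """events: iterable of (t, x, y, polarity); guide_frame: HxW uint8 image."""
    pos = np.zeros((H, W), np.float32)
    neg = np.zeros((H, W), np.float32)
    for t, x, y, pol in events:
        target = pos if pol > 0 else neg
        target[int(y), int(x)] += 1.0
    # Normalize event counts, then stack with the guide frame as channels.
    pos /= max(pos.max(), 1.0)
    neg /= max(neg.max(), 1.0)
    guide = guide_frame.astype(np.float32) / 255.0
    return np.stack([pos, neg, guide], axis=0)  # (3, H, W) network input

# Usage: consecutive windows give temporally consecutive encoded images.
events = [(0.001, 100, 50, 1), (0.002, 101, 50, -1)]
frame = np.zeros((H, W), np.uint8)
encoded = encode_window(events, frame)
print(encoded.shape)  # (3, 260, 346)
```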

Due to the inability to interact with the environment, offline reinforcement learning (RL) methods face the challenge of value estimation at out-of-distribution (OOD) points. Existing methods for addressing this issue either constrain the policy to exclude OOD actions or make the $Q$-function pessimistic. However, these methods can be overly conservative or fail to identify OOD areas accurately. To overcome this problem, we propose a Constrained Policy optimization with Explicit Behavior density (CPED) method that utilizes a flow-GAN model to explicitly estimate the density of the behavior policy. By estimating the explicit density, CPED can accurately identify the safe region and enable optimization within that region, resulting in less conservative learning policies. We further provide theoretical results for the flow-GAN estimator and a performance guarantee for CPED by showing that CPED can find the optimal $Q$-function value. Empirically, CPED outperforms existing alternatives on various standard offline reinforcement learning tasks, yielding higher expected returns.
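
As a minimal illustration of density-constrained policy optimization, the sketch below substitutes a kernel density estimator for the paper's flow-GAN: candidate actions whose estimated behavior log-density falls below a threshold are excluded, and a toy $Q$-function is maximized over the remaining "safe region". The KDE stand-in, the toy $Q$, and the threshold choice are all illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)

# Stand-in for the flow-GAN: any explicit density estimator of the behavior
# policy illustrates the constraint. Here a KDE is fitted to logged
# (state, action) pairs; the real method trains a flow-GAN instead.
dataset_sa = rng.normal(loc=0.0, scale=0.5, size=(1000, 3))  # toy (s, a) pairs
density = KernelDensity(bandwidth=0.3).fit(dataset_sa)

def q_value(sa):
    """Toy Q-function for demonstration only."""
    return -np.sum((sa - 1.0) ** 2, axis=1)

# Constrained policy improvement at a fixed state: among candidate actions,
# maximize Q subject to log p_behavior(s, a) >= log_eps (the "safe region").
state = np.zeros(2)
candidates = rng.uniform(-2, 2, size=(500, 1))
sa = np.hstack([np.tile(state, (500, 1)), candidates])
log_p = density.score_samples(sa)
log_eps = np.quantile(log_p, 0.5)     # configurable density threshold
feasible = log_p >= log_eps
best = np.argmax(np.where(feasible, q_value(sa), -np.inf))
print(f"chosen action={candidates[best, 0]:.3f}, log-density={log_p[best]:.2f}")
```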

This work presents an information-theoretic perspective on group fairness trade-offs in federated learning (FL) with respect to sensitive attributes such as gender, race, etc. Existing works often focus on either $\textit{global fairness}$ (overall disparity of the model across all clients) or $\textit{local fairness}$ (disparity of the model at each client), without always considering their trade-offs. There is a lack of understanding of the interplay between global and local fairness in FL, particularly under data heterogeneity, and of if and when one implies the other. To address this gap, we leverage a body of work in information theory called partial information decomposition (PID) to identify three sources of unfairness in FL, namely, $\textit{Unique Disparity}$, $\textit{Redundant Disparity}$, and $\textit{Masked Disparity}$. We demonstrate how these three disparities contribute to global and local fairness using canonical examples. This decomposition helps us derive fundamental limits on the trade-off between global and local fairness, highlighting where they agree or disagree. We introduce the $\textit{Accuracy and Global-Local Fairness Optimality Problem (AGLFOP)}$, a convex optimization problem that defines the theoretical limits of accuracy and fairness trade-offs, identifying the best possible performance any FL strategy can attain given a dataset and client distribution. We also present experimental results on synthetic datasets and the ADULT dataset to support our theoretical findings.
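
The following toy example (not the paper's AGLFOP formulation) shows how global and local fairness can disagree under data heterogeneity: two clients each exhibit a large statistical-parity gap in opposite directions, so local disparities are large while the global disparity nearly cancels, in the spirit of the Masked Disparity described above.

```python
import numpy as np

def disparity(pred, group):
    """|P(yhat=1 | group=0) - P(yhat=1 | group=1)|, the statistical parity gap."""
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

rng = np.random.default_rng(0)
clients = []
# Client 1's model favors group 0; client 2's model favors group 1.
for favored in (0, 1):
    g = rng.binomial(1, 0.5, 500)
    rate = np.where(g == favored, 0.7, 0.3)   # acceptance rate depends on group
    clients.append((rng.binomial(1, rate), g))

for i, (pred, g) in enumerate(clients, 1):
    print(f"local disparity, client {i}: {disparity(pred, g):.3f}")

pred_all = np.concatenate([p for p, _ in clients])
g_all = np.concatenate([g for _, g in clients])
print(f"global disparity: {disparity(pred_all, g_all):.3f}")
# Local disparities are large (~0.4) while the global gap nearly cancels: the
# opposing per-client disparities mask each other at the global level.
```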

Out-of-distribution (OOD) detection has become more pertinent with advances in network design and increased task complexity. Identifying which parts of the data a given network is misclassifying has become as valuable as the network's overall performance. Quantization can compress the model, but at the cost of a minor performance loss. This loss of performance further necessitates deriving confidence estimates for the network's predictions. In line with this thinking, we introduce an Uncertainty Quantification (UQ) technique to quantify the uncertainty in the predictions of a pre-trained vision model. We subsequently leverage this information to retain valuable predictions while ignoring the non-confident ones. We observe that up to 80% of the samples our technique ignores would otherwise have been misclassified. The code is available here.
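
The abstract does not name the specific UQ technique, so the sketch below uses predictive-entropy thresholding on softmax outputs as a minimal stand-in: the most uncertain predictions are ignored and the confident ones retained. The synthetic logits and the 20% rejection rate are illustrative.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy(probs):
    """Predictive entropy per sample; higher means more uncertain."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

rng = np.random.default_rng(0)
# Synthetic logits with varying sharpness stand in for a vision model's output.
logits = rng.normal(size=(1000, 10)) * rng.uniform(0.5, 4.0, size=(1000, 1))
probs = softmax(logits)
unc = entropy(probs)

threshold = np.quantile(unc, 0.8)      # ignore the most uncertain 20%
keep = unc <= threshold
print(f"kept {keep.mean():.0%} of predictions; "
      f"mean confidence kept={probs.max(axis=1)[keep].mean():.2f} "
      f"vs ignored={probs.max(axis=1)[~keep].mean():.2f}")
```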

Technological innovation plays a crucial role in driving economic growth and development. In this study, we investigate the extent to which technological innovation contributes to a more sustainable future and fosters entrepreneurship. To examine this, we focus on robotic process automation (RPA), a highly relevant technology. We conducted a comprehensive analysis by examining the usage of RPA and its impact on environmental, social, and governance (ESG) factors. Our research involved gathering data from the 300 largest companies by market capitalization. We assessed whether these companies used RPA and obtained their corresponding ESG ratings. To investigate the relationship between RPA and ESG, we employed a contingency table analysis, categorizing the data by ESG rating. We then used Pearson's Chi-square Test of Independence to assess the impact of RPA on ESG. Our findings revealed a statistically significant association between RPA and ESG ratings, indicating their interconnection. The calculated value of Pearson's Chi-square statistic was 6.54, with a corresponding p-value of 0.0381; at the five percent significance level, the RPA and ESG variables are therefore not independent. These results suggest that RPA, as a representative of modern technologies, likely influences the achievement of a sustainable future and the promotion of entrepreneurship. In conclusion, our study provides empirical evidence supporting the notion that technological innovations such as RPA have the potential to positively shape sustainability efforts and entrepreneurial endeavours.
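
For reference, a chi-square test of independence on a contingency table takes a few lines with SciPy. The reported statistic (6.54 with p = 0.0381) corresponds to 2 degrees of freedom, consistent with a 2x3 table (RPA use versus three ESG rating bands); the counts below are purely hypothetical, since the paper's actual table is not given in the abstract.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x3 contingency table: rows = RPA use (yes/no),
# columns = ESG rating band. The real counts are not in the abstract.
table = [
    [40, 55, 35],   # companies using RPA, by ESG band (illustrative)
    [60, 45, 65],   # companies not using RPA (illustrative)
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# Reject independence at the 5% level when p < 0.05.
```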

This report summarizes the discussions and conclusions of a 2-day multidisciplinary workshop that brought together researchers and practitioners in healthcare, computer science, and social sciences to explore what lessons were learned and what actions, primarily in research, could be taken. One consistent observation was that there is significant merit in thinking not only about pandemic situations, but also about peacetime advances, as many healthcare networks and communities are now in a perpetual state of crisis. Attendees discussed how the COVID-19 pandemic amplified gaps in our health and computing systems, and how current and future computing technologies could fill these gaps and improve the trajectory of the next pandemic. Three major computing themes emerged from the workshop: models, data, and infrastructure. Computational models are extremely important during pandemics, from anticipating the supply needs of hospitals, to determining the care capacity of hospital and social service providers, to projecting the spread of the disease. Accurate, reliable models can save lives and inform community leaders on policy decisions. Health system users require accurate, reliable data to achieve success when applying models. This requires data and measurement standardization across health care organizations, modernizing the data infrastructure, and methods for ensuring data remains private while shared for model development, validation, and application. Finally, many health care systems lack the data, compute, and communication infrastructures required to build models on their data, use those models in ordinary operations, or even reliably access their data. Robust and timely computing research has the potential to better support healthcare workers in saving lives in times of crisis (e.g., pandemics) as well as during today's relative peacetime.

In this work, we investigate the online learning problem of revenue maximization in ad auctions, where the seller needs to learn the click-through rate (CTR) of each ad candidate and charges the winner in a pay-per-click manner. We focus on two models of the advertisers' strategic behavior. First, we assume that the advertiser is completely myopic; i.e., in each round, they aim to maximize their utility only for the current round. In this setting, we develop an online mechanism based on upper-confidence bounds that achieves a tight $O(\sqrt{T})$ regret in the worst case, and negative regret when the values are static across all the auctions and there is a gap between the highest expected value (i.e., value multiplied by CTR) and the second-highest expected value among ads. Next, we assume that the advertiser is non-myopic and cares about their long-term utility. This setting is much more complex, since an advertiser is incentivized to influence the mechanism by bidding strategically in earlier rounds. In this setting, we provide an algorithm that achieves negative regret for the static valuation setting (with a positive gap), in sharp contrast with prior work that shows $O(T^{2/3})$ regret when the valuation is generated adversarially.
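
A minimal sketch of the myopic setting's core mechanic, assuming a standard UCB construction rather than the paper's exact mechanism: in each round the seller ranks ads by bid times an upper-confidence estimate of the CTR, shows the winner, and updates that ad's CTR estimate from the observed click. Pricing is omitted for brevity; the bids and true CTRs are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative UCB-based pay-per-click allocation (not the paper's mechanism).
true_ctr = np.array([0.10, 0.05, 0.08])
bids = np.array([1.0, 2.5, 1.2])        # static per-click values, for simplicity
clicks = np.zeros(3)
shows = np.zeros(3)

for t in range(1, 10001):
    n = np.maximum(shows, 1)
    ucb = clicks / n + np.sqrt(2 * np.log(t) / n)   # optimistic CTR estimate
    scores = bids * np.minimum(ucb, 1.0)            # optimistic expected value
    winner = int(np.argmax(scores))
    shows[winner] += 1
    clicks[winner] += rng.random() < true_ctr[winner]

print("empirical CTRs:", np.round(clicks / np.maximum(shows, 1), 3))
print("impressions per ad:", shows.astype(int))
```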

In this work, we propose a new dynamic distributed planning approach that accounts for the changes an agent introduces to its set of actions to be planned in response to changes that occur in its environment. Our approach fits into the context of distributed planning for distributed plans, where each agent can produce its own plans. In our approach, plan generation is based on constraint satisfaction and uses genetic algorithms. Each agent generates a new plan whenever there is a change in its set of actions to plan, so that the newly introduced actions are taken into account. In this new plan, the agent takes as its new action set all the old, un-executed actions of the old plan together with the new actions engendered by the changes, and as its new initial state the state in which its action set underwent the change. We use a concrete case to illustrate and demonstrate the utility of our approach.
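
A toy sketch of the constraint-satisfaction-by-genetic-algorithm step, under illustrative assumptions: plans are permutations of the current action set (old un-executed actions plus newly introduced ones), fitness counts satisfied ordering constraints, and standard order crossover and swap mutation evolve the population. The action names, constraints, and GA operators are hypothetical.

```python
import random

random.seed(0)

# Replanning: whenever the action set changes, re-run the GA over the
# un-executed old actions plus the new ones.
actions = ["a1", "a2", "a3", "a4", "a5"]                  # current action set
constraints = [("a1", "a3"), ("a2", "a4"), ("a3", "a5")]  # (x must precede y)

def fitness(plan):
    pos = {a: i for i, a in enumerate(plan)}
    return sum(pos[x] < pos[y] for x, y in constraints)

def crossover(p1, p2):
    """Order crossover: keep a prefix of p1, fill the rest in p2's order."""
    cut = random.randint(1, len(p1) - 1)
    head = p1[:cut]
    return head + [a for a in p2 if a not in head]

def mutate(plan, rate=0.2):
    plan = plan[:]
    if random.random() < rate:
        i, j = random.sample(range(len(plan)), 2)
        plan[i], plan[j] = plan[j], plan[i]
    return plan

pop = [random.sample(actions, len(actions)) for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                       # elitist selection
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(20)]

best = max(pop, key=fitness)
print("plan:", best, "satisfied constraints:", fitness(best), "/", len(constraints))
```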

In this paper, we propose jointly learned attention and recurrent neural network (RNN) models for multi-label classification. While approaches based on either model exist (e.g., for the task of image captioning), training such existing network architectures typically requires pre-defined label sequences. For multi-label classification, it is desirable to have a robust inference process so that prediction errors do not propagate and degrade performance. Our proposed model uniquely integrates attention and Long Short-Term Memory (LSTM) models, which not only addresses the above problem but also allows one to identify visual objects of interest with varying sizes without prior knowledge of a particular label ordering. More importantly, label co-occurrence information can be jointly exploited by our LSTM model. Finally, by advancing the technique of beam search, prediction of multiple labels can be efficiently achieved by our proposed network model.
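
The sketch below illustrates the beam-search step over label sequences; the scoring function is a toy stand-in for the attention-LSTM, which in the real model would condition on attended image features and previously emitted labels. The labels, beam width, and stand-in scorer are illustrative.

```python
import numpy as np

labels = ["person", "dog", "car", "tree", "<end>"]

def step_scores(prefix):
    """Toy per-step log-probabilities; a real model would run the attention-
    LSTM conditioned on image features and the labels emitted so far."""
    rng = np.random.default_rng(hash(tuple(prefix)) % (2**32))
    logp = np.log(rng.dirichlet(np.ones(len(labels))))
    for lab in prefix:                 # forbid repeating a label
        logp[labels.index(lab)] = -np.inf
    return logp

def beam_search(width=3, max_len=4):
    beams = [([], 0.0)]
    for _ in range(max_len):
        cand = []
        for prefix, score in beams:
            if prefix and prefix[-1] == "<end>":
                cand.append((prefix, score))   # finished sequence, keep as-is
                continue
            logp = step_scores(prefix)
            for i, lab in enumerate(labels):
                if np.isfinite(logp[i]):
                    cand.append((prefix + [lab], score + logp[i]))
        beams = sorted(cand, key=lambda x: x[1], reverse=True)[:width]
    return beams[0]

best, score = beam_search()
print("predicted labels:", [l for l in best if l != "<end>"], f"(logp={score:.2f})")
```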
