
Algorithm aversion occurs when humans are reluctant to use algorithms despite their superior performance. Studies show that algorithm aversion is mitigated by giving users outcome control, i.e., agency over how models' predictions are incorporated into decision-making. We study whether algorithm aversion is also mitigated by process control, wherein users can decide which input factors and algorithms to use in model training. We conduct a replication study of outcome control and test novel process control study conditions on Amazon Mechanical Turk (MTurk) and Prolific. Our results partly confirm prior findings on the mitigating effects of outcome control, while also highlighting reproducibility challenges. We find that process control in the form of choosing the training algorithm mitigates algorithm aversion, but changing inputs does not. Furthermore, giving users both outcome and process control does not reduce algorithm aversion more than outcome or process control alone. This study contributes to design considerations around mitigating algorithm aversion.

Related Content

Communication compression is an essential strategy for alleviating communication overhead by reducing the volume of information exchanged between computing nodes in large-scale distributed stochastic optimization. Although numerous algorithms with convergence guarantees have been obtained, the optimal performance limit under communication compression remains unclear. In this paper, we investigate the performance limit of distributed stochastic optimization algorithms employing communication compression. We focus on two main types of compressors, unbiased and contractive, and address the best-possible convergence rates one can obtain with these compressors. We establish the lower bounds for the convergence rates of distributed stochastic optimization in six different settings, combining strongly-convex, generally-convex, or non-convex functions with unbiased or contractive compressor types. To bridge the gap between lower bounds and existing algorithms' rates, we propose NEOLITHIC, a nearly optimal algorithm with compression that achieves the established lower bounds up to logarithmic factors under mild conditions. Extensive experimental results support our theoretical findings. This work provides insights into the theoretical limitations of existing compressors and motivates further research into fundamentally new compressor properties.
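For concreteness, the two compressor classes discussed above can be illustrated with textbook examples: rand-k sparsification (unbiased after rescaling) and top-k selection (contractive). The sketch below is a generic illustration of these standard definitions, not code from the paper; the function names and the parameter k are ours.

import numpy as np

def rand_k_unbiased(x, k, rng):
    """Unbiased compressor: keep k uniformly random coordinates, rescaled by d/k
    so that E[C(x)] = x (the unbiasedness property)."""
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = x[idx] * (d / k)
    return out

def top_k_contractive(x, k):
    """Contractive compressor: keep the k largest-magnitude coordinates,
    satisfying ||C(x) - x||^2 <= (1 - k/d) ||x||^2."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

rng = np.random.default_rng(0)
g = rng.standard_normal(10)   # e.g., a stochastic gradient to be communicated
print(rand_k_unbiased(g, 3, rng))
print(top_k_contractive(g, 3))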

We present a virtual reality (VR) framework to automate the data collection process in cloth folding tasks. The framework uses skeleton representations to help the user define folding plans for different classes of garments, allowing the folding to be replicated on unseen items of the same class. We evaluate the framework in the context of automating garment folding tasks. A quantitative analysis is performed on three classes of garments, demonstrating that the framework reduces the need for user intervention. We also compare skeleton representations with RGB and binary images in a classification task on a large dataset of clothing items, motivating the use of the framework for other classes of garments.

In conventional backscatter communication (BackCom) systems, time division multiple access (TDMA) and frequency division multiple access (FDMA) are generally adopted for multiuser backscattering due to their simplicity of implementation. However, as the number of backscatter devices (BDs) proliferates, traditional centralized control techniques incur high overhead, and inter-user coordination is unaffordable for the passive BDs; these issues have received little attention in existing works and remain unsolved. To this end, in this paper, we propose a slotted ALOHA-based random access scheme for BackCom systems, in which each BD is randomly chosen and allowed to coexist with one active device for hybrid multiple access. To characterize and evaluate the performance, a resource allocation problem maximizing the minimum transmission rate is formulated, where transmit antenna selection, receive beamforming design, reflection coefficient adjustment, power control, and access probability determination are jointly considered. To deal with this intractable problem, we first transform the max-min objective function into an equivalent linear one, and then decompose the resulting problem into three sub-problems. Next, a block coordinate descent (BCD)-based greedy algorithm, combined with a penalty function, successive convex approximation, and linear programming, is designed to obtain sub-optimal solutions in a tractable manner. Simulation results demonstrate that the proposed algorithm outperforms benchmark algorithms in terms of transmission rate and fairness.
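The transformation of the max-min objective into an equivalent linear one mentioned above is, in generic form, the standard epigraph reformulation; the notation below is illustrative rather than the paper's:

$$\max_{\mathbf{v}} \ \min_{k} \ R_k(\mathbf{v}) \quad\Longleftrightarrow\quad \max_{\mathbf{v},\, t} \ t \quad \text{subject to} \quad R_k(\mathbf{v}) \ge t \quad \forall k,$$

where $\mathbf{v}$ collects the optimization variables and $R_k$ denotes the $k$-th user's rate; the auxiliary variable $t$ makes the objective linear while preserving the optimum.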

The {\em binary deletion channel} with deletion probability $d$ ($\text{BDC}_d$) is a random channel that deletes each bit of the input message independently with probability $d$. It has been studied extensively as a canonical example of a channel with synchronization errors. Perhaps the most important question regarding the BDC is determining its capacity. Mitzenmacher and Drinea (ITIT 2006) and Kirsch and Drinea (ITIT 2009) show a method by which distributions on run lengths can be converted to codes for the BDC, yielding a lower bound of $\mathcal{C}(\text{BDC}_d) > 0.1185 \cdot (1-d)$. Fertonani and Duman (ITIT 2010), Dalai (ISIT 2011) and Rahmati and Duman (ITIT 2014) use computer-aided analyses based on the Blahut-Arimoto algorithm to prove an upper bound of $\mathcal{C}(\text{BDC}_d) < 0.4143\cdot(1-d)$ in the high deletion probability regime ($d > 0.65$). In this paper, we show that the Blahut-Arimoto algorithm can be implemented with a lower space complexity, allowing us to extend the upper bound analyses and prove an upper bound of $\mathcal{C}(\text{BDC}_d) < 0.3745 \cdot(1-d)$ for all $d \geq 0.68$. Furthermore, we show that an extension of the Blahut-Arimoto algorithm can also be used to select better run length distributions for Mitzenmacher and Drinea's construction, yielding a lower bound of $\mathcal{C}(\text{BDC}_d) > 0.1221 \cdot (1 - d)$.
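For reference, the classical Blahut-Arimoto iteration for the capacity of a generic discrete memoryless channel, which the computer-aided analyses above adapt to the deletion channel, can be sketched as follows. This is a textbook version under standard definitions, not the paper's reduced-space implementation.

import numpy as np

def blahut_arimoto(W, n_iter=200):
    # W[x, y] = P(Y = y | X = x); returns (capacity estimate in bits, input distribution).
    n_in = W.shape[0]
    p = np.full(n_in, 1.0 / n_in)       # start from the uniform input distribution
    for _ in range(n_iter):
        q = p @ W                        # output distribution induced by p
        # relative entropy D(W(.|x) || q) for each input symbol x, in nats
        d = np.sum(np.where(W > 0, W * np.log(W / q), 0.0), axis=1)
        p = p * np.exp(d)                # multiplicative update
        p /= p.sum()
    q = p @ W
    d = np.sum(np.where(W > 0, W * np.log(W / q), 0.0), axis=1)
    return float(p @ d / np.log(2)), p

# Sanity check on a binary symmetric channel with crossover 0.1:
# the estimate should approach 1 - h(0.1), roughly 0.531 bits.
W_bsc = np.array([[0.9, 0.1], [0.1, 0.9]])
print(blahut_arimoto(W_bsc)[0])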

Real-time perception and motion planning are two crucial tasks for autonomous driving. While many research works focus on improving the performance of perception and motion planning individually, it remains unclear how a perception error may adversely impact the motion planning results. In this work, we propose a joint simulation framework with LiDAR-based perception and motion planning for real-time automated driving. Taking the sensor input from the CARLA simulator with additive noise, a LiDAR perception system is designed to detect and track all surrounding vehicles and to provide precise orientation and velocity information. Next, we introduce a new collision bound representation that reduces the communication cost between the perception module and the motion planner. A novel collision checking algorithm based on line intersection checks is implemented, which is more efficient at long ranges than the traditional occupancy-grid method. We evaluate the joint simulation framework in CARLA for urban driving scenarios. Experiments show that our proposed automated driving system can execute at 25 Hz, which meets the real-time requirement. The LiDAR perception system has high accuracy within 20 meters when evaluated against the ground truth. The motion planner consistently maintains a safe distance when tested in CARLA urban driving scenarios.
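As an illustration of the line intersection checking referred to above, a standard orientation-based segment intersection test is sketched below; this is a generic geometric predicate, not the paper's implementation, and the helper names are ours.

def orientation(p, q, r):
    # Cross product (q - p) x (r - p): > 0 counter-clockwise, < 0 clockwise, 0 collinear.
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def on_segment(p, q, r):
    # True if a point r known to be collinear with p and q lies within their bounding box.
    return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0]) and
            min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))

def segments_intersect(a, b, c, d):
    # True if segment ab intersects segment cd, including touching endpoints.
    o1, o2 = orientation(a, b, c), orientation(a, b, d)
    o3, o4 = orientation(c, d, a), orientation(c, d, b)
    if o1 * o2 < 0 and o3 * o4 < 0:      # proper crossing: endpoints lie on opposite sides
        return True
    if o1 == 0 and on_segment(a, b, c): return True
    if o2 == 0 and on_segment(a, b, d): return True
    if o3 == 0 and on_segment(c, d, a): return True
    if o4 == 0 and on_segment(c, d, b): return True
    return False

# Example: an edge of the ego trajectory bound vs. an edge of an obstacle bound.
print(segments_intersect((0, 0), (4, 4), (0, 4), (4, 0)))   # True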

For simulations of time-evolution problems, such as weather and climate models, taking the largest stable time-step is advantageous for reducing the wall-clock time. We propose methods for studying the effect of linear dispersive errors on the time-stepping accuracy of nonlinear problems, and demonstrate an application to the Rotating Shallow Water Equations (RSWEs). To begin, a nonlinear time-stepping `triadic error' metric is constructed from three-wave interactions. Stability polynomials, obtained from the oscillatory Dahlquist test equation, enable the computation of triadic errors for different time-steppers; we compare five classical schemes. We next provide test cases comparing different time-step sizes within a numerical model. The first case is a reforming Gaussian height perturbation, which contains a nonlinear phase shift that can be missed with a large time-step. The second set of test cases initialises individual waves to allow specific triads to form. The slow transition from linear to nonlinear dynamics provides a good setting for testing how the slow phase information is replicated with a large time-step. Three models, including the finite element code Gusto and the Met Office's new LFRic model, are examined in these test cases with different time-steppers.
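For reference, the oscillatory Dahlquist test equation used to obtain stability polynomials, together with one example stability polynomial (that of the classical fourth-order Runge-Kutta scheme), takes the standard textbook form:

$$\frac{\mathrm{d}y}{\mathrm{d}t} = i\omega y, \qquad y^{n+1} = R(i\omega\Delta t)\, y^{n}, \qquad R_{\mathrm{RK4}}(z) = 1 + z + \tfrac{z^2}{2} + \tfrac{z^3}{6} + \tfrac{z^4}{24}.$$

The dispersive (phase) error of a scheme is then the deviation of $\arg R(i\omega\Delta t)$ from the exact phase increment $\omega\Delta t$.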

This paper establishes the fundamental limits of a two-user single-receiver system where communication from User 1 (but not from User 2) needs to be undetectable to an external warden. Our fundamental limits show a tradeoff between the highest rates (or square-root rates) that are simultaneously achievable for the two users. Moreover, coded time-sharing for both users is fundamentally required on most channels, which distinguishes this setup from the more classical setups with either only covert users or only non-covert users. Interestingly, the presence of a non-covert user can be beneficial for improving the covert capacity of the other user.
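For readers unfamiliar with the terminology, the "square-root rate" follows the usual normalization in covert communications: over $n$ channel uses the covert user can reliably and undetectably transmit a number of bits that scales like $\sqrt{n}$, so its throughput is normalized by $\sqrt{n}$ rather than $n$. Under standard (illustrative) notation with $M_i(n)$ denoting user $i$'s number of messages,

$$r_1 = \lim_{n\to\infty} \frac{\log M_1(n)}{\sqrt{n}}, \qquad R_2 = \lim_{n\to\infty} \frac{\log M_2(n)}{n}.$$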

In this paper, we put forward a model of a zero-error distributed function compression system of two binary memoryless sources X and Y, where there are two encoders En1 and En2 and one decoder De, connected by two channels (En1, De) and (En2, De) with capacity constraints C1 and C2, respectively. The encoder En1 can observe X or (X,Y) and the encoder En2 can observe Y or (X,Y), depending on whether the two switches s1 and s2 are open or closed (corresponding to taking values 0 or 1). The decoder De is required to compress the binary arithmetic sum f(X,Y)=X+Y with zero error by using the system multiple times. We use (s1s2;C1,C2;f) to denote the model, where we assume C1 \geq C2 without loss of generality by symmetry. The compression capacity of the model is defined as the maximum average number of times that the function f can be compressed with zero error per use of the system, which measures the efficiency of using the system. We fully characterize the compression capacities for all four cases of the model (s1s2;C1,C2;f), i.e., s1s2 = 00, 01, 10, 11. Here, the characterization of the compression capacity for the case (01;C1,C2;f) with C1>C2 is highly nontrivial, and a novel graph coloring approach is developed for it. Furthermore, we apply the compression capacity for (01;C1,C2;f) to an open problem in network function computation: whether the best known upper bound of Guang et al. on the computing capacity is tight in general.

When an exposure of interest is confounded by unmeasured factors, an instrumental variable (IV) can be used to identify and estimate certain causal contrasts. Identification of the marginal average treatment effect (ATE) from IVs relies on strong untestable structural assumptions. When one is unwilling to assert such structure, IVs can nonetheless be used to construct bounds on the ATE. Famously, Balke and Pearl (1997) proved tight bounds on the ATE for a binary outcome in a randomized trial with noncompliance and no covariate information. We demonstrate how these bounds remain useful in observational settings with baseline confounders of the IV, as well as in randomized trials with measured baseline covariates. The resulting bounds on the ATE are non-smooth functionals, and thus standard nonparametric efficiency theory is not immediately applicable. To remedy this, we propose (1) under a novel margin condition, influence function-based estimators of the bounds that can attain parametric convergence rates when the nuisance functions are modeled flexibly, and (2) estimators of smooth approximations of these bounds. We propose extensions to continuous outcomes, explore finite sample properties in simulations, and illustrate the proposed estimators in a randomized field experiment studying the effect of canvassing on voter turnout.
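Balke and Pearl obtained their bounds as the optima of a linear program over latent response-type probabilities. The sketch below reproduces that generic linear program numerically under the standard IV assumptions (randomized binary instrument, exclusion restriction, binary treatment and outcome); the observed probabilities are illustrative, and this is not the influence-function-based estimator proposed in the paper.

import itertools
import numpy as np
from scipy.optimize import linprog

# Illustrative observed conditionals P(X=x, Y=y | Z=z); each z-slice sums to 1.
p_obs = {
    (0, 0, 0): 0.42, (0, 0, 1): 0.28, (0, 1, 0): 0.09, (0, 1, 1): 0.21,
    (1, 0, 0): 0.12, (1, 0, 1): 0.08, (1, 1, 0): 0.24, (1, 1, 1): 0.56,
}

# Latent response types t = (x when z=0, x when z=1, y when x=0, y when x=1).
types = list(itertools.product([0, 1], repeat=4))

# ATE = P(Y(1)=1) - P(Y(0)=1) is linear in the response-type probabilities.
c = np.array([t[3] - t[2] for t in types], dtype=float)

# Consistency: P(X=x, Y=y | Z=z) equals the mass of types with x_z = x and y_x = y.
A_eq = [[1.0 if (t[z] == x and t[2 + x] == y) else 0.0 for t in types]
        for (z, x, y) in p_obs]
b_eq = list(p_obs.values())

lower = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
upper = linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
print(f"Balke-Pearl ATE bounds: [{lower.fun:.3f}, {-upper.fun:.3f}]")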

A Command, Control, Communication, and Intelligence (C3I) system is a system of systems that integrates computing machines, sensors, and communication networks. C3I systems are increasingly used in critical civil and military operations to achieve information superiority, assurance, and operational efficacy. Like traditional systems, C3I systems face widespread cyber threats. However, the sensitive nature of the application domains of C3I systems (e.g., military operations) makes their security a critical concern. For instance, a cyber-attack on military installations can have detrimental impacts on national security. Therefore, in this paper, we review the state of the art on the security of C3I systems. In particular, this paper aims to identify the security vulnerabilities, attack vectors, and countermeasures for C3I systems. We used the well-known systematic literature review method to select and review 77 studies on the security of C3I systems. Our review enabled us to identify 27 vulnerabilities, 22 attack vectors, and 62 countermeasures for C3I systems. This review has also revealed several areas for future research and identified key lessons with regard to C3I systems' security.
