
The quest to understand consciousness, once the purview of philosophers and theologians, is now actively pursued by scientists of many stripes. We examine consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. In the spirit of Alan Turing's simple yet powerful definition of a computer, the Turing Machine (TM), and the perspective of computational complexity theory, we formalize a modified version of the Global Workspace Theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux and others. We are not looking for a complex model of the brain nor of cognition, but for a simple computational model of (the admittedly complex concept of) consciousness. We do this by defining the Conscious Turing Machine (CTM), also called a conscious AI, and then we define consciousness and related notions in the CTM. While these are only mathematical (TCS) definitions, we suggest why the CTM has the feeling of consciousness. The TCS perspective provides a simple formal framework to employ tools from computational complexity theory and machine learning to help us understand consciousness and related concepts. Previously we explored high level explanations for the feelings of pain and pleasure in the CTM. Here we consider three examples related to vision (blindsight, inattentional blindness, and change blindness), followed by discussions of dreams, free will, and altered states of consciousness.
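As a rough illustration of the architecture that GWT and the CTM formalize, the following toy Python sketch (all class and processor names are our own invention, not the paper's formalism) runs the basic competition-and-broadcast cycle: specialized processors submit weighted chunks, one chunk wins access to the limited-capacity workspace, and the winner is broadcast back to every processor.

```python
import random
from dataclasses import dataclass

@dataclass
class Chunk:
    source: str      # name of the processor that produced the chunk
    content: str     # payload submitted to the workspace
    weight: float    # self-assessed importance used in the competition

class Processor:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def submit(self):
        return Chunk(self.name, f"report from {self.name}", random.random())

    def receive(self, chunk):
        self.inbox.append(chunk)          # broadcast content reaches every processor

class Workspace:
    """Toy global workspace: one chunk wins the competition and is broadcast."""
    def __init__(self, processors):
        self.processors = processors

    def cycle(self):
        # Each processor submits a weighted chunk; a simple argmax stands in
        # for the CTM's far more structured up-tree competition.
        submissions = [p.submit() for p in self.processors]
        winner = max(submissions, key=lambda c: c.weight)
        for p in self.processors:         # down-tree broadcast
            p.receive(winner)
        return winner

if __name__ == "__main__":
    ws = Workspace([Processor(n) for n in ("vision", "audition", "memory")])
    for _ in range(3):
        print(ws.cycle())
```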

Related Content

Its aim is to understand the nature of computation and, as a consequence of this understanding, to provide more efficient methodologies. All papers introducing or studying mathematical, logical, and formal concepts and methods are welcome, provided that their motivation is clearly drawn from the field of computing. Papers published in Theoretical Computer Science are grouped into three sections according to their nature. The first section, "Algorithms, automata, complexity and games", is devoted to the study of algorithms and their complexity using analytical, combinatorial, or probabilistic methods. It includes the whole field of abstract complexity (i.e., all results about hierarchies that can be defined using Turing machines), the whole field of automata and language theory (including automata on infinite words and infinitary languages), the whole field of geometrical (graphic) applications, and the whole field of measuring system performance using statistical methods.
June 6, 2022

Real-time motion tracking of kinematic chains is a key prerequisite in the control of, e.g., robotic actuators and autonomous vehicles and also has numerous biomechanical applications. In recent years, it has been shown that, by placing inertial sensors on segments that are connected by rotational joints, the motion of that kinematic chain can be tracked accurately. These methods specifically avoid using magnetometer measurements, which are known to be unreliable since the magnetic field at the different sensor locations is typically different. They rely on the assumption that the motion of the kinematic chain is sufficiently rich to ensure observability of the relative pose. However, a formal investigation of this crucial requirement has not yet been presented, and no specific conditions for observability have so far been given. In this work, we present an observability analysis and show that the relative pose of the body segments is indeed observable under a very mild condition on the motion. We support our results by simulation studies, in which we employ a state estimator that uses neither magnetometer measurements nor additional sensors, does not assume that the accelerometers measure only the direction of gravity, and imposes no restrictions on the range of motion or degrees of freedom of the joints. We investigate the effect of the amount of excitation and of stationary periods in the data on the accuracy of the estimates. We then use experimental data from two mechanical joints as well as from a human gait experiment to validate the observability criterion in practice and to show that small excitation levels are sufficient for obtaining accurate estimates even in the presence of time periods during which the motion is not observable.
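To make the magnetometer-free setting concrete, here is a heavily simplified planar sketch in Python. Note that it relies on the quasi-static accelerometer assumption that the paper explicitly avoids, and all signal models and filter gains are illustrative choices; it only shows why gyroscope-only integration of a joint angle drifts and how an additional, motion-dependent measurement restores accuracy.

```python
import numpy as np

# Toy planar hinge joint: segment angles th1 and th2 = th1 + q (q = joint angle).
dt, T = 0.01, 20.0
t = np.arange(0.0, T, dt)
th1 = 0.4 * np.sin(0.7 * t)                 # segment-1 orientation
q_true = 0.8 * np.sin(1.3 * t) + 0.3        # true joint angle
th2 = th1 + q_true

rng = np.random.default_rng(0)
# Gyroscopes: angular rate + bias + noise (no magnetometer anywhere).
gyr1 = np.gradient(th1, dt) + 0.02 + 0.01 * rng.standard_normal(t.size)
gyr2 = np.gradient(th2, dt) - 0.01 + 0.01 * rng.standard_normal(t.size)
# Accelerometers: gravity expressed in each segment frame (quasi-static toy model;
# the paper avoids exactly this assumption, used here only to keep the sketch short).
g = 9.81
acc1 = np.stack([-g * np.sin(th1), g * np.cos(th1)]) + 0.05 * rng.standard_normal((2, t.size))
acc2 = np.stack([-g * np.sin(th2), g * np.cos(th2)]) + 0.05 * rng.standard_normal((2, t.size))

# Pure gyro integration of the relative rate drifts because of the biases ...
q_gyro = np.cumsum(gyr2 - gyr1) * dt + q_true[0]
# ... while a complementary filter that blends in accelerometer inclination does not.
q_acc = np.arctan2(-acc2[0], acc2[1]) - np.arctan2(-acc1[0], acc1[1])
alpha, q_est = 0.99, np.empty_like(t)
q_est[0] = q_acc[0]
for k in range(1, t.size):
    q_est[k] = alpha * (q_est[k - 1] + (gyr2[k] - gyr1[k]) * dt) + (1 - alpha) * q_acc[k]

print("gyro-only RMSE :", np.sqrt(np.mean((q_gyro - q_true) ** 2)))
print("fused RMSE     :", np.sqrt(np.mean((q_est - q_true) ** 2)))
```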

We use the term threshold Ornstein-Uhlenbeck for a continuous-time threshold autoregressive process. It follows Ornstein-Uhlenbeck dynamics when above or below a fixed level, yet at this level (the threshold) its coefficients can be discontinuous. We discuss (quasi-)maximum likelihood estimation of the drift parameters under both continuous-time and discrete-time observations. In the ergodic case, we derive the consistency and rate of convergence of these estimators in the long-time and high-frequency regimes. Based on these results, we develop a test for the presence of a threshold in the dynamics. Finally, we apply these statistical tools to the modeling of short-term US interest rates.
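A minimal sketch, assuming illustrative parameter values, of how such a process can be simulated with an Euler-Maruyama scheme and how the drift parameters in each regime can be recovered from discrete observations (under the Euler approximation, the quasi-likelihood drift estimator reduces to a regime-wise linear regression of the increments):

```python
import numpy as np

rng = np.random.default_rng(1)

# Threshold Ornstein-Uhlenbeck: dX_t = (a1 - b1*X_t) dt + s dW_t  if X_t < r,
#                               dX_t = (a2 - b2*X_t) dt + s dW_t  otherwise.
# Parameter values and the threshold r are illustrative choices, not the paper's.
a1, b1, a2, b2, s, r = 1.0, 2.0, -0.5, 1.0, 0.3, 0.2
dt, n = 1e-3, 500_000

x = np.empty(n)
x[0] = 0.0
dW = rng.standard_normal(n - 1) * np.sqrt(dt)
for k in range(n - 1):                       # Euler-Maruyama discretisation
    a, b = (a1, b1) if x[k] < r else (a2, b2)
    x[k + 1] = x[k] + (a - b * x[k]) * dt + s * dW[k]

# Drift estimation from discrete observations: regress the scaled increments
# (X_{k+1} - X_k) / dt on (1, X_k) separately within each regime.
def fit_regime(mask):
    X = np.column_stack([np.ones(mask.sum()), x[:-1][mask]])
    y = (x[1:] - x[:-1])[mask] / dt
    (a_hat, neg_b_hat), *_ = np.linalg.lstsq(X, y, rcond=None)
    return a_hat, -neg_b_hat

below = x[:-1] < r
print("regime X <  r:", fit_regime(below))    # estimate of (a1, b1)
print("regime X >= r:", fit_regime(~below))   # estimate of (a2, b2)
```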

This paper presents an efficient learning-based method to solve the inverse kinematics (IK) problem on soft robots with highly non-linear deformation. The major challenge in efficiently computing IK for such robots is the lack of an analytical formulation for either the forward or the inverse kinematics. To address this challenge, we employ neural networks to learn both the forward kinematics mapping and its Jacobian. As a result, Jacobian-based iteration can be applied to solve the IK problem. A sim-to-real training transfer strategy is adopted to make this approach more practical. We first generate a large number of samples in a simulation environment for learning both the kinematics and the Jacobian networks of a soft robot design. Thereafter, a sim-to-real layer of differentiable neurons is employed to map the results of simulation to the physical hardware, where this sim-to-real layer can be learned from a very limited number of training samples generated on the hardware. The effectiveness of our approach has been verified on pneumatic-driven soft robots for path following and interactive positioning.
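The core Jacobian-based iteration can be sketched as follows. A small random tanh network stands in for the learned forward-kinematics model, and finite differences stand in for the learned Jacobian network (the paper trains both from data and adds a sim-to-real layer, which is omitted here); all shapes and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a *learned* forward-kinematics model f: actuation q (3-dim)
# -> tip position x (2-dim). A random tanh MLP plays the role of the trained
# network; in the paper this would be fitted to simulation data.
W1, b1 = rng.standard_normal((16, 3)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((2, 16)), rng.standard_normal(2)

def forward(q):
    return W2 @ np.tanh(W1 @ q + b1) + b2

def jacobian(q, eps=1e-5):
    # Finite differences stand in for the learned Jacobian network.
    J = np.zeros((2, 3))
    for i in range(3):
        dq = np.zeros(3); dq[i] = eps
        J[:, i] = (forward(q + dq) - forward(q - dq)) / (2 * eps)
    return J

def solve_ik(x_target, q0, iters=100, damping=1e-3):
    q = q0.copy()
    for _ in range(iters):
        err = x_target - forward(q)
        J = jacobian(q)
        # Damped least-squares (Levenberg-Marquardt style) update.
        q += J.T @ np.linalg.solve(J @ J.T + damping * np.eye(2), err)
    return q

x_target = forward(np.array([0.3, -0.2, 0.5]))       # a reachable target
q_hat = solve_ik(x_target, q0=np.zeros(3))
print("residual:", np.linalg.norm(x_target - forward(q_hat)))
```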

Shortly after it was first introduced in 2006, differential privacy became the flagship data privacy definition. Since then, numerous variants and extensions have been proposed to adapt it to different scenarios and attacker models. In this work, we propose a systematic taxonomy of these variants and extensions. We list all data privacy definitions based on differential privacy and partition them into seven categories, depending on which aspect of the original definition is modified. These categories act like dimensions: variants from the same category cannot be combined, but variants from different categories can be combined to form new definitions. We also establish a partial ordering of relative strength between these notions by summarizing existing results. Furthermore, we list which of these definitions satisfy desirable properties such as composition, post-processing, and convexity, either by providing a novel proof or by collecting existing ones.
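For reference, the unmodified definition that all of these variants start from: a mechanism M is epsilon-differentially private if Pr[M(D) in S] <= exp(epsilon) * Pr[M(D') in S] for all neighboring datasets D, D' and all output sets S. A minimal sketch of the classic Laplace mechanism that satisfies it (the example values are illustrative):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=np.random.default_rng()):
    """Release true_value with epsilon-differential privacy by adding Laplace
    noise scaled to the query's sensitivity (the original, unmodified definition
    that the surveyed variants build on)."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: a counting query has sensitivity 1 (one person changes the count by 1).
exact_count = 1234
noisy_count = laplace_mechanism(exact_count, sensitivity=1.0, epsilon=0.5)
print(noisy_count)
```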

An important characteristic of neural networks is their ability to learn representations of the input data with effective features for prediction, which is believed to be a key factor in their superior empirical performance. To better understand the source and benefit of feature learning in neural networks, we consider learning problems motivated by practical data, where the labels are determined by a set of class-relevant patterns and the inputs are generated from these along with some background patterns. We prove that neural networks trained by gradient descent can succeed on these problems. The success relies on the emergence and improvement of effective features, which are learned efficiently among exponentially many candidates by exploiting the data (in particular, the structure of the input distribution). In contrast, no linear model over polynomially many data-independent features can achieve comparably small errors. Furthermore, if the specific input structure is removed, then no polynomial-time algorithm in the Statistical Query model can learn even weakly. These results provide theoretical evidence that feature learning in neural networks depends strongly on the input structure and leads to superior performance. Our preliminary experimental results on synthetic and real data also provide positive support.
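The following toy sketch (our own parameterization, not the paper's exact construction) mimics the flavor of this data model: inputs are built from class-relevant and background patterns, the label is an XOR-type function of which relevant patterns are present, a linear model on the raw features stays near chance, and a small two-layer network learns the task.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
d, n = 100, 4000

# Dictionary of patterns: two class-relevant patterns (indices 0 and 1) plus
# background patterns (indices 2..19). Toy parameterisation, not the paper's.
patterns = rng.standard_normal((20, d)) / np.sqrt(d)
a = rng.integers(0, 2, n)                 # is pattern 0 present?
b = rng.integers(0, 2, n)                 # is pattern 1 present?
y = a ^ b                                 # XOR label: not linearly separable
X = a[:, None] * patterns[0] + b[:, None] * patterns[1]
for i in range(n):                        # add a few background patterns + noise
    bg = rng.choice(np.arange(2, 20), size=3, replace=False)
    X[i] += patterns[bg].sum(axis=0) + 0.05 * rng.standard_normal(d)

X_tr, y_tr, X_te, y_te = X[:3000], y[:3000], X[3000:], y[3000:]
lin = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
print("linear model on raw features:", lin.score(X_te, y_te))   # ~ chance
print("two-layer network           :", net.score(X_te, y_te))   # learns the patterns
```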

It is widely assumed that the peak velocities of visually guided saccades made in the dark are up to 10% slower than those made in the light. Studies that have examined the impact of surrounding brightness conditions come to differing conclusions as to whether they have an influence and, if so, in what manner. The problem is complex because the illumination condition may not affect the measured peak velocities on its own but only in combination with the estimation of the pupil size, which deforms during saccades and varies with gaze position. Even the measurement technique of video-based eye tracking itself could play a significant role. To investigate this issue, we constructed a stepper-motor-driven artificial eye with a fixed pupil size to mimic human saccades with predetermined peak velocities and amplitudes under three different brightness conditions with the EyeLink 1000, one of the most commonly used eye trackers. The aim was to control both the pupil and the brightness. With our device, an overall good accuracy and precision of the EyeLink 1000 could be confirmed. Furthermore, we found no artifact of changing brightness conditions in pupil-based eye tracking, neither for the pupil size nor for the peak velocities. What we did find was a small yet systematic and significant change of the measured pupil sizes as a function of gaze direction.
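For readers unfamiliar with the measurement pipeline, here is a minimal sketch of how a peak velocity is read off a sampled gaze trace; the sampling rate matches the EyeLink 1000, but the synthetic saccade profile and the smoothing window are illustrative choices, not the study's analysis.

```python
import numpy as np

fs = 1000.0                                   # EyeLink 1000 sampling rate in Hz
t = np.arange(0, 0.08, 1 / fs)

# Synthetic 10-degree saccade profile (a smooth sigmoid stands in for real data).
amplitude = 10.0                              # degrees
position = amplitude / (1 + np.exp(-(t - 0.04) * 160))

velocity = np.gradient(position, 1 / fs)      # deg/s
# Light smoothing with a 5-sample moving average before taking the maximum,
# an illustrative choice rather than the study's pipeline.
kernel = np.ones(5) / 5
velocity_smooth = np.convolve(velocity, kernel, mode="same")
print(f"peak velocity: {velocity_smooth.max():.0f} deg/s")
```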

Interpretability methods are developed to understand the working mechanisms of black-box models, which is crucial to their responsible deployment. Fulfilling this goal requires both that the explanations generated by these methods are correct and that people can easily and reliably understand them. While the former has been addressed in prior work, the latter is often overlooked, resulting in informal model understanding derived from a handful of local explanations. In this paper, we introduce explanation summary (ExSum), a mathematical framework for quantifying model understanding, and propose metrics for its quality assessment. On two domains, ExSum highlights various limitations in the current practice, helps develop accurate model understanding, and reveals easily overlooked properties of the model. We also connect understandability to other properties of explanations such as human alignment, robustness, and counterfactual minimality and plausibility.
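As a toy illustration of the kind of generalized statement such a framework evaluates (the rule, data, and metric names below are our own illustrative stand-ins, not ExSum's API), one can measure how often a rule about local explanations applies and how often it holds:

```python
from dataclasses import dataclass

# Toy local explanations: per-token saliency scores for a sentiment model.
explanations = [
    {"not": -0.8, "good": 0.6, "movie": 0.1},
    {"never": -0.5, "boring": -0.4, "plot": 0.0},
    {"great": 0.9, "acting": 0.2},
]
NEGATIONS = {"not", "never", "no"}

@dataclass
class Rule:
    applies: callable     # which (token, saliency) pairs the rule talks about
    holds: callable       # what the rule claims about them

# Candidate generalisation: "negation words receive negative saliency".
rule = Rule(applies=lambda tok, s: tok in NEGATIONS,
            holds=lambda tok, s: s < 0)

pairs = [(tok, s) for expl in explanations for tok, s in expl.items()]
covered = [(tok, s) for tok, s in pairs if rule.applies(tok, s)]
coverage = len(covered) / len(pairs)                            # how much the rule covers
validity = sum(rule.holds(t, s) for t, s in covered) / max(len(covered), 1)
print(f"coverage={coverage:.2f}  validity={validity:.2f}")
```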

In the past few decades, artificial intelligence (AI) technology has experienced swift development, changing everyone's daily life and profoundly altering the course of human society. The intention behind developing AI is to benefit humans by reducing human labor, bringing everyday convenience to human lives, and promoting social good. However, recent research and AI applications show that AI can cause unintentional harm to humans, such as making unreliable decisions in safety-critical scenarios or undermining fairness by inadvertently discriminating against a group. Thus, trustworthy AI has recently attracted immense attention, requiring careful consideration to avoid the adverse effects that AI may bring to humans, so that humans can fully trust and live in harmony with AI technologies. Recent years have witnessed a tremendous amount of research on trustworthy AI. In this survey, we provide a comprehensive overview of trustworthy AI from a computational perspective, to help readers understand the latest technologies for achieving it. Trustworthy AI is a large and complex area involving various dimensions. In this work, we focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being. For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems. We also discuss the accordant and conflicting interactions among different dimensions and discuss promising directions for future research on trustworthy AI.

This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence the estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators such that the error can be greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical and the proposed estimators for estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural-network-based models, under simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.
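As a toy illustration of why accounting for confounding matters (a simulation of our own, not the firm's data or the paper's estimators), compare a naive difference in repayment rates with an inverse-propensity-weighted estimate:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 50_000

# Toy confounded lending scenario: riskier borrowers are both less likely to
# receive a credit increase (T=1) and less likely to repay, so the naive
# comparison of repayment rates is biased.
risk = rng.standard_normal(n)                        # confounder
p_treat = 1 / (1 + np.exp(2.0 * risk))               # lender decision depends on risk
T = rng.binomial(1, p_treat)
true_effect = 0.10                                   # lift in repayment probability
p_repay = np.clip(0.7 - 0.15 * risk + true_effect * T, 0, 1)
Y = rng.binomial(1, p_repay)

naive = Y[T == 1].mean() - Y[T == 0].mean()          # ignores confounding

# Inverse-propensity-weighted estimator: reweight outcomes by the estimated
# probability of the observed decision given the confounder.
e_hat = LogisticRegression().fit(risk[:, None], T).predict_proba(risk[:, None])[:, 1]
ipw = np.mean(T * Y / e_hat) - np.mean((1 - T) * Y / (1 - e_hat))

print(f"true effect {true_effect:.3f}  naive {naive:.3f}  IPW {ipw:.3f}")
```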

Reinforcement learning is one of the core components in designing an artificially intelligent system emphasizing real-time response. Reinforcement learning drives a system to take actions in an arbitrary environment, with or without prior knowledge of the environment model. In this paper, we present a comprehensive study of reinforcement learning, covering its challenges, the recent development of different state-of-the-art techniques, and future directions. The fundamental objective of this paper is to provide an overview of the available reinforcement learning methods that is informative yet simple to follow for new researchers and academics in this domain, with the latest concerns in mind. First, we illustrate the core techniques of reinforcement learning in an easily understandable and comparable way. Finally, we analyze and describe recent developments in reinforcement learning approaches. Our analysis indicates that most of the models focus on tuning policy values rather than other aspects of a particular state of reasoning.
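As a minimal, self-contained example of the value-tuning flavor of methods the survey refers to, here is tabular Q-learning on a toy chain environment (the environment and hyperparameters are illustrative):

```python
import numpy as np

# Minimal tabular Q-learning on a 5-state chain: move left or right, reward 1
# for reaching the rightmost state.
n_states, n_actions = 5, 2           # actions: 0 = left, 1 = right
alpha, gamma, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(5)
Q = np.zeros((n_states, n_actions))

for episode in range(2000):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy action selection.
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Core update: nudge the value of (state, action) toward the bootstrapped target.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(np.round(Q, 2))                # greedy policy: always move right
```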
