
Conscious states (states that there is something it is like to be in) seem both rich, or full of detail, and ineffable, or hard to fully describe or recall. The problem of ineffability, in particular, is a longstanding issue in philosophy that partly motivates the explanatory gap: the belief that consciousness cannot be reduced to underlying physical processes. Here, we provide an information-theoretic dynamical systems perspective on the richness and ineffability of consciousness. In our framework, the richness of conscious experience corresponds to the amount of information in a conscious state and ineffability corresponds to the amount of information lost at different stages of processing. We describe how attractor dynamics in working memory would induce impoverished recollections of our original experiences, how the discrete symbolic nature of language is insufficient for describing the rich and high-dimensional structure of experiences, and how similarity in the cognitive function of two individuals relates to improved communicability of their experiences to each other. While our model may not settle all questions relating to the explanatory gap, it makes progress toward a fully physicalist explanation of the richness and ineffability of conscious experience: two important aspects that seem to be part of what makes qualitative character so puzzling.
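
To make the working-memory claim concrete, the sketch below uses a tiny Hopfield-style attractor network (a standard stand-in, not the paper's specific model): a noisy, information-rich input state settles into one of a handful of stored attractors, so most of the bits that distinguished the original state are unrecoverable at recall.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store a few binary patterns (the attractors) via a Hebbian rule.
n, n_patterns = 64, 4
patterns = rng.choice([-1, 1], size=(n_patterns, n))
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)

def settle(state, max_steps=50):
    """Run the attractor dynamics until the state stops changing."""
    for _ in range(max_steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

# A "rich" momentary experience: a stored pattern corrupted by noise.
rich = patterns[0] * np.where(rng.random(n) < 0.2, -1, 1)
recalled = settle(rich.copy())

# Recall lands on one of only n_patterns attractors, discarding the
# bits of detail that distinguished this particular rich state.
print("bits changed by recall:", int((recalled != rich).sum()))
print("collapsed to stored pattern:", bool(np.array_equal(recalled, patterns[0])))
```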

Related content

The journal 《計算機信息》 publishes high-quality papers that expand the scope of operations research and computing. It seeks original research papers on theory, methods, experiments, systems, and applications; novel survey and tutorial papers; and papers describing new and useful software tools.
August 14, 2023

Label Distribution Learning (LDL) assigns soft labels, a.k.a. degrees, to a sample. In reality, obtaining complete degrees is laborious, giving birth to Incomplete LDL (InLDL). However, InLDL often suffers from performance degeneration. To remedy it, existing methods need one or more explicit regularizations, leading to burdensome parameter tuning and extra computation. We argue that the label distribution itself may provide a useful prior; when used appropriately, the InLDL problem can be solved without any explicit regularization. In this paper, we offer a rational alternative that uses such a prior. Our intuition is that large degrees are likely to receive more attention, small ones are easily overlooked, and missing degrees are completely neglected in InLDL. To learn an accurate label distribution, it is crucial not to ignore the small observed degrees but to give them properly large weights, while gradually increasing the weights of the missing degrees. To this end, we first define a weighted empirical risk and derive upper bounds between the expected risk and the weighted empirical risk, which reveals in principle that weighting plays an implicit regularization role. Then, using the prior of degrees, we design a weighting scheme and verify its effectiveness. To sum up, our model has four advantages: it is 1) model-selection free, as no explicit regularization is imposed; 2) equipped with a closed-form solution (sub-problem) and easy to implement (a few lines of code); 3) of linear computational complexity in the number of samples, and thus scalable to large datasets; and 4) competitive with the state of the art even without any explicit regularization.
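
A minimal sketch of the flavor of this weighting idea (illustrative only; the weighting rule below is assumed for the example, and the gradual up-weighting of missing degrees is omitted): a weighted squared loss over the observed degrees, where smaller observed degrees receive larger weights so they are not drowned out by the large ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy InLDL data: X features, D true degrees, M the observation mask.
n, d, c = 200, 10, 5
X = rng.normal(size=(n, d))
B_true = rng.normal(size=(d, c))
Z = np.exp(0.5 * (X @ B_true))
D = Z / Z.sum(axis=1, keepdims=True)          # rows sum to 1, like degrees
M = (rng.random((n, c)) < 0.7).astype(float)  # 1 = degree observed

def fit_weighted(X, D, M, lr=0.5, epochs=300):
    """Minimize a weighted squared loss on observed degrees only.
    Weight 1 - degree gives small observed degrees large weights
    (an illustrative rule, not the paper's exact scheme)."""
    B = np.zeros((X.shape[1], D.shape[1]))
    W = M * (1.0 - D)                         # zero weight on missing entries
    for _ in range(epochs):
        grad = X.T @ (W * (X @ B - D)) / len(X)
        B -= lr * grad
    return B

B = fit_weighted(X, D, M)
print("mean abs error on all degrees:", float(np.abs(X @ B - D).mean()))
```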

Stochastic gradient descent (SGD) is the standard algorithm for training neural networks. However, it remains poorly understood how SGD navigates the highly nonlinear and degenerate loss landscape of a neural network. In this work, we prove that the minibatch noise of SGD regularizes the solution towards a balanced solution whenever the loss function contains a rescaling symmetry. Because the difference between a simple diffusion process and SGD dynamics is most significant when symmetries are present, our theory implies that loss function symmetries constitute an essential probe of how SGD works. We then apply this result to derive the stationary distribution of stochastic gradient flow for a diagonal linear network with arbitrary depth and width. The stationary distribution exhibits complicated nonlinear phenomena such as phase transitions, broken ergodicity, and fluctuation inversion. These phenomena are shown to exist uniquely in deep networks, implying a fundamental difference between deep and shallow models.
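
The rescaling symmetry and the balancing effect can be seen in the smallest case, a depth-2 "diagonal" model f(x) = u·w·x, where the loss is invariant under u → cu, w → w/c. The toy simulation below (my own illustration, not the paper's derivation) shows minibatch noise shrinking u² − w², a quantity that gradient flow would conserve exactly, toward zero. Label noise is added so that the SGD noise does not vanish at the optimum.

```python
import numpy as np

rng = np.random.default_rng(1)

# Data with label noise so the SGD noise never vanishes at the optimum.
x = rng.normal(size=1000)
y = 2.0 * x + rng.normal(size=1000)

# f(x) = u * w * x: the loss is invariant under u -> c*u, w -> w/c.
# Gradient flow exactly conserves u^2 - w^2; minibatch SGD does not.
u, w = 3.0, 0.3
lr, batch = 0.02, 8
print(f"start: u^2 - w^2 = {u*u - w*w:.3f}")
for _ in range(30000):
    i = rng.integers(0, len(x), size=batch)
    err = u * w * x[i] - y[i]
    gu = np.mean(2 * err * w * x[i])
    gw = np.mean(2 * err * u * x[i])
    u, w = u - lr * gu, w - lr * gw

print(f"end:   u^2 - w^2 = {u*u - w*w:.3f}  (u*w = {u*w:.3f}, target 2.0)")
```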

Individuals or companies in a large social or financial network often display rather heterogeneous behaviors for various reasons. In this work, we propose a network vector autoregressive model with a latent group structure to model heterogeneous dynamic patterns observed from network nodes, for which group-wise network effects and time-invariant fixed effects can be naturally incorporated. In our framework, the model parameters and network node memberships can be simultaneously estimated by minimizing a least-squares type objective function. In particular, our theoretical investigation allows the number of latent groups G to be over-specified when achieving the estimation consistency of the model parameters and group memberships, which significantly improves the robustness of the proposed approach. When G is correctly specified, valid statistical inference can be made for model parameters based on the asymptotic normality of the estimators. A data-driven criterion is developed to consistently identify the true group number for practical use. Extensive simulation studies and two real data examples are used to demonstrate the effectiveness of the proposed methodology.
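
One way to make the simultaneous estimation concrete is a k-means-style alternating least squares on simulated data (a toy sketch under assumed dynamics, not the authors' exact model or theory): alternate between group-wise OLS for the parameters and reassigning each node to the group that best explains its series.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy network VAR with two latent groups:
# y_{i,t} = a_g * (neighbor average at t-1) + b_g * y_{i,t-1} + noise.
N, T, G = 60, 200, 2
A = (rng.random((N, N)) < 0.1).astype(float); np.fill_diagonal(A, 0)
Wn = A / np.maximum(A.sum(1, keepdims=True), 1)   # row-normalized adjacency
true_g = rng.integers(0, G, size=N)
a = np.array([0.3, -0.2])[true_g]
b = np.array([0.5, 0.1])[true_g]

Y = np.zeros((T, N))
for t in range(1, T):
    Y[t] = a * (Wn @ Y[t-1]) + b * Y[t-1] + rng.normal(scale=0.5, size=N)

# Alternating least squares over parameters and group memberships.
X1 = (Wn @ Y[:-1].T).T     # network regressor
X2, Yt = Y[:-1], Y[1:]     # own lag and target
g = rng.integers(0, G, size=N)
for _ in range(20):
    theta = np.zeros((G, 2))
    for k in range(G):     # group-wise OLS
        idx = np.where(g == k)[0]
        if len(idx) == 0:
            continue
        Xk = np.column_stack([X1[:, idx].ravel(), X2[:, idx].ravel()])
        theta[k] = np.linalg.lstsq(Xk, Yt[:, idx].ravel(), rcond=None)[0]
    # Reassign each node to the group with the smallest residual.
    res = np.stack([((Yt - t0*X1 - t1*X2)**2).sum(0) for t0, t1 in theta])
    g = res.argmin(0)

recovery = max((g == true_g).mean(), (g != true_g).mean())  # up to label swap
print("membership recovery rate:", recovery)
```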

We represent 3D shapes by structured 2D representations of fixed length, making it feasible to apply well-investigated 2D convolutional neural networks (CNNs) to both discriminative and geometric tasks on 3D shapes. We first provide a general introduction to such structured descriptors, analyze their different forms, and show how a simple 2D CNN can be used to achieve good classification results. With a specialized classification network for images and our structured representation, we achieve a classification accuracy of 99.7% on the ModelNet40 test set, improving on the previous state of the art by a large margin. We finally provide a novel framework for performing the geometric task of 3D segmentation using 2D CNNs and the structured representation, demonstrating the utility of such descriptors for both discriminative and geometric tasks.
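
The pipeline's key point, that a fixed-size structured 2D descriptor lets an off-the-shelf image CNN process 3D shapes, can be sketched in a few lines (a minimal stand-in, assuming a 64×64 three-channel descriptor storing surface coordinates; this is not the paper's actual network):

```python
import torch
import torch.nn as nn

# A structured 2D descriptor of a 3D shape (e.g. a 64x64 "geometry image"
# whose 3 channels store x, y, z coordinates of a surface parameterization)
# can be fed to an ordinary image CNN.
class Shape2DCNN(nn.Module):
    def __init__(self, n_classes=40):           # 40 classes as in ModelNet40
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                        # x: (B, 3, 64, 64)
        return self.head(self.features(x).flatten(1))

model = Shape2DCNN()
fake_batch = torch.randn(8, 3, 64, 64)           # stand-in structured descriptors
print(model(fake_batch).shape)                   # torch.Size([8, 40])
```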

We show that the essential properties of entropy (monotonicity, additivity and subadditivity) are consequences of entropy being a monoidal natural transformation from the under category functor $-/\mathsf{LProb}_{\rho}$ (where $\mathsf{LProb}_{\rho}$ is the category of $\ell_{\rho}$ discrete probability spaces) to $\Delta_{\mathbb{R}}$. Moreover, the Shannon entropy can be characterized as the universal monoidal natural transformation from $-/\mathsf{LProb}_{\rho}$ to the category of "strongly regularly ordered" vector spaces (a reflective subcategory of the lax-slice 2-category over $\mathsf{MonCat}_{\ell}$ in the 2-category of monoidal categories), providing a succinct characterization of Shannon entropy as a reflection arrow. We can likewise define entropy for every category with a monoidal structure on its under categories (e.g. the category of finite abelian groups, the category of finite inhabited sets, the category of finite dimensional vector spaces, and the augmented simplex category) via the reflection arrow to the reflective subcategory of strongly regularly ordered vector spaces. This implies that all these entropies over different categories are components of a single natural transformation (the unit of the idempotent monad), allowing us to connect these entropies in a natural manner. We also provide a universal characterization of the conditional Shannon entropy based on the chain rule which, unlike the characterization of information loss by Baez, Fritz and Leinster, does not require any continuity assumption.
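
For reference, the three essential properties named above, together with the chain rule used to characterize conditional entropy, take the following standard form for Shannon entropy:

```latex
% Standard statements for Shannon entropy H(p) = -\sum_i p_i \log p_i.
\begin{align*}
  &\text{Monotonicity:}  && H(p) \le H(q) \ \text{whenever } p \text{ is a coarse-graining of } q,\\
  &\text{Additivity:}    && H(p \otimes q) = H(p) + H(q),\\
  &\text{Subadditivity:} && H(r) \le H(p) + H(q) \ \text{for any coupling } r \text{ of } p \text{ and } q,\\
  &\text{Chain rule:}    && H(X, Y) = H(X) + H(Y \mid X).
\end{align*}
```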

Motivated by a practical scenario in blockchains in which a client, who possesses a transaction, wishes to privately verify that the transaction actually belongs to a block, we investigate the problem of private retrieval of Merkle proofs (i.e. proofs of inclusion/membership) in a Merkle tree. In this setting, one or more servers store the nodes of a binary tree (a Merkle tree), while a client wants to retrieve the set of nodes along a root-to-leaf path (i.e. a Merkle proof, after appropriate node swapping operations), without letting the servers know which path is being retrieved. We propose a method that partitions the Merkle tree to enable parallel private retrieval of the Merkle proofs. The partitioning step is based on a novel tree coloring called ancestral coloring in which nodes that have ancestor-descendant relationship must have distinct colors. To minimize the retrieval time, the coloring is required to be balanced, i.e. the sizes of the color classes differ by at most one. We develop a fast algorithm to find a balanced (in fact, any) ancestral coloring in almost linear time in the number of tree nodes, which can handle trees with billions of nodes in a few minutes. Our partitioning method can be applied on top of any private information retrieval scheme, leading to the minimum storage overhead and fastest running times compared to existing approaches.
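
To see why balance is the hard part, note that one ancestral coloring comes for free: coloring each non-root node by its depth never assigns a node the same color as an ancestor, but the class sizes 2, 4, 8, … are far from balanced, which is exactly what the paper's algorithm fixes. A small sketch (my illustration, not the paper's algorithm):

```python
# A Merkle proof for a leaf consists of the siblings of the nodes on the
# root-to-leaf path; after the node swapping, the retrieved set is one
# node per level, so same-depth nodes never appear together in a proof.

def depth_coloring(h):
    """Color the non-root nodes of a perfect binary tree of height h
    by depth. Nodes are 1-indexed heap-style (root = 1)."""
    colors = {}
    for v in range(2, 2 ** (h + 1)):       # skip the root
        colors[v] = v.bit_length() - 1     # depth of node v
    return colors

def is_ancestral(colors):
    """Check that no node shares a color with any proper ancestor."""
    for v, c in colors.items():
        u = v // 2
        while u > 1:                       # walk up to (not into) the root
            if colors.get(u) == c:
                return False
            u //= 2
    return True

colors = depth_coloring(4)
sizes = [list(colors.values()).count(c) for c in sorted(set(colors.values()))]
print("valid ancestral coloring:", is_ancestral(colors))
print("class sizes (unbalanced):", sizes)   # 2, 4, 8, 16 for h = 4
```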

The generation and verification of quantum states are fundamental tasks for quantum information processing that have recently been investigated by Irani, Natarajan, Nirkhe, Rao and Yuen [CCC 2022], Rosenthal and Yuen [ITCS 2022], Metger and Yuen [FOCS 2023] under the term \emph{state synthesis}. This paper studies this concept from the viewpoint of quantum distributed computing, and especially distributed quantum Merlin-Arthur (dQMA) protocols. We first introduce a novel task, on a line, called state generation with distributed inputs (SGDI). In this task, the goal is to generate the quantum state $U\ket{\psi}$ at the rightmost node of the line, where $\ket{\psi}$ is a quantum state given at the leftmost node and $U$ is a unitary matrix whose description is distributed over the nodes of the line. We give a dQMA protocol for SGDI and utilize this protocol to construct a dQMA protocol for the Set Equality problem studied by Naor, Parter and Yogev [SODA 2020], and complement our protocol by showing classical lower bounds for this problem. Our second contribution is a dQMA protocol, based on a recent work by Zhu and Hayashi [Physical Review A, 2019], to create EPR-pairs between adjacent nodes of a network without quantum communication. As an application of this dQMA protocol, we prove a general result showing how to convert any dQMA protocol on an arbitrary network into another dQMA protocol where the verification stage does not require any quantum communication.
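
For concreteness, the EPR pair that the second protocol certifies between adjacent nodes is the standard maximally entangled two-qubit state; and one natural reading of "a unitary whose description is distributed over the line" (an illustrative assumption, since the abstract does not spell out the encoding) is a product of local factors:

```latex
% The standard EPR pair targeted between adjacent nodes, and an
% illustrative product-form reading of the distributed unitary
% (the factor U_j held by node j of the line is an assumption,
% not spelled out in the abstract):
\begin{align*}
  \ket{\mathrm{EPR}} &= \tfrac{1}{\sqrt{2}}\bigl(\ket{00} + \ket{11}\bigr), \\
  U\ket{\psi} &= U_k \cdots U_2 U_1 \ket{\psi}.
\end{align*}
```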

Philosophical research in AI has hitherto largely focused on the ethics of AI. In this paper we, an ethicist of belief and a machine learning scientist, suggest that we need to pursue a novel area of philosophical research in AI - the epistemology of AI, and in particular an ethics of belief for AI. Here we take the ethics of belief, a field that has been defined in various ways, to refer to a sub-field within epistemology. This subfield is concerned with the study of possible moral, practical, and other non-alethic dimensions of belief. And in this paper, we will primarily be concerned with the normative question within the ethics of belief regarding what agents - both human and artificial - ought to believe, rather than with descriptive questions concerning whether certain beliefs meet various evaluative standards such as being true, being justified or warranted, constituting knowledge, and so on. We suggest four topics in extant work in the ethics of (human) belief that can be applied to an ethics of AI belief: doxastic wronging by AI; morally owed beliefs; pragmatic and moral encroachment on AI beliefs; and moral responsibility for AI beliefs. We also indicate two relatively nascent areas of philosophical research that haven't yet been generally recognized as ethics of AI belief research, but that do fall within this field of research in virtue of investigating various moral and practical dimensions of belief: the epistemic and ethical decolonization of AI; and epistemic injustice in AI.

When is heterogeneity in the composition of an autonomous robotic team beneficial and when is it detrimental? We investigate and answer this question in the context of a minimally viable model that examines the role of heterogeneous speeds in perimeter defense problems, where defenders share a total allocated speed budget. We consider two distinct problem settings and develop strategies based on dynamic programming and on local interaction rules. We present a theoretical analysis of both approaches and our results are extensively validated using simulations. Interestingly, our results demonstrate that the viability of heterogeneous teams depends on the amount of information available to the defenders. Moreover, our results suggest a universality property: across a wide range of problem parameters the optimal ratio of the speeds of the defenders remains nearly constant.
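
A crude Monte Carlo toy (my own illustration under strong simplifying assumptions, not the paper's model: two fixed stations on a unit-circumference perimeter, a fixed warning time, and intruder landing points concentrated near one station) already hints at the role of information: when defenders know where intruders concentrate, an uneven speed split under the same budget can capture more.

```python
import numpy as np

rng = np.random.default_rng(3)

def capture_rate(split, budget=2.0, warning=0.2, trials=50000):
    """Two defenders at fixed stations on a unit-circumference perimeter;
    an intruder is caught if some defender can reach its landing point
    within `warning` time. Toy model, not the paper's formulation."""
    v = np.asarray(split, float) * budget / sum(split)  # enforce speed budget
    stations = np.array([0.0, 0.5])
    x = rng.normal(0.0, 0.15, size=trials) % 1.0        # arrivals near station 0
    dist = np.abs(x[:, None] - stations[None, :])
    dist = np.minimum(dist, 1.0 - dist)                 # arc distance on circle
    return float((dist / v[None, :] <= warning).any(axis=1).mean())

for split in [(1.0, 1.0), (1.5, 0.5), (1.9, 0.1)]:
    print(split, "->", round(capture_rate(split), 3))
```

The toy varies only one ingredient, knowledge of where intruders land; the paper studies the dependence on defender information systematically across its two problem settings.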

The era of big data provides researchers with convenient access to copious data. However, people often have little prior knowledge about such data. The increasing prevalence of big data challenges traditional methods of learning causality, which were developed for settings with limited data and solid prior causal knowledge. This survey aims to close the gap between big data and learning causality with a comprehensive and structured review of traditional and frontier methods and a discussion of some open problems in learning causality. We begin with preliminaries of learning causality. Then we categorize and revisit methods of learning causality for the typical problems and data types. After that, we discuss the connections between learning causality and machine learning. Finally, some open problems are presented to show the great potential of learning causality with data.
