
A query algorithm based on homomorphism counts is a procedure for determining whether a given instance satisfies a property by counting homomorphisms between the given instance and finitely many predetermined instances. In a left query algorithm, we count homomorphisms from the predetermined instances to the given instance, while in a right query algorithm we count homomorphisms from the given instance to the predetermined instances. Homomorphisms are usually counted over the semiring N of non-negative integers; it is also meaningful, however, to count homomorphisms over the Boolean semiring B, in which case the homomorphism count indicates whether or not a homomorphism exists. We first characterize the properties that admit a left query algorithm over B by showing that these are precisely the properties that are both first-order definable and closed under homomorphic equivalence. After this, we turn attention to a comparison between left query algorithms over B and left query algorithms over N. In general, there are properties that admit a left query algorithm over N but not over B. The main result of this paper asserts that if a property is closed under homomorphic equivalence, then that property admits a left query algorithm over B if and only if it admits a left query algorithm over N. In other words and rather surprisingly, homomorphism counts over N do not help as regards properties that are closed under homomorphic equivalence. Finally, we characterize the properties that admit both a left query algorithm over B and a right query algorithm over B.
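For intuition, here is a minimal sketch (not taken from the paper) of what counting homomorphisms over N versus over B means for small directed graphs: the count over N is the number of homomorphisms, while the count over B only records whether one exists. The brute-force functions and the graph encoding below are illustrative assumptions.

```python
from itertools import product

# Minimal illustration (not from the paper): brute-force homomorphism
# counting between small directed graphs given as (vertices, edge set).

def hom_count_N(G, H):
    """Number of homomorphisms from G to H, i.e. the count over the semiring N."""
    (VG, EG), (VH, EH) = G, H
    VG, VH = list(VG), list(VH)
    count = 0
    for image in product(VH, repeat=len(VG)):         # every map VG -> VH
        f = dict(zip(VG, image))
        if all((f[u], f[v]) in EH for (u, v) in EG):  # all edges preserved
            count += 1
    return count

def hom_count_B(G, H):
    """The count over the Boolean semiring B: does a homomorphism exist?"""
    return hom_count_N(G, H) > 0

# A left query algorithm with predetermined instances F1, ..., Fk decides the
# property of a given instance A by inspecting the vector (hom(F1, A), ..., hom(Fk, A)).
edge = ({1, 2}, {(1, 2), (2, 1)})
triangle = ({1, 2, 3}, {(u, v) for u in (1, 2, 3) for v in (1, 2, 3) if u != v})
print(hom_count_N(edge, triangle))   # 6 maps of an edge into a triangle
print(hom_count_B(triangle, edge))   # False: the triangle is not 2-colourable
```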

Related Content

Advances in AI invite the misuse of language models as stand-ins for human minds or participants, which fundamentally mischaracterizes these statistical algorithms. We argue that language models should be embraced as flexible simulation tools, able to mimic a wide range of behaviors, perspectives, and psychological attributes evident in human language data, but the models themselves should not be equated to or anthropomorphized as human minds.

Both the Bayes factor and the relative belief ratio satisfy the principle of evidence and so can be seen as valid measures of statistical evidence. Certainly, Bayes factors are regularly employed. The question, then, is which of these measures of evidence is more appropriate. It is argued here that there are questions concerning the validity of a commonly used definition of the Bayes factor based on a mixture prior and that, all things considered, the relative belief ratio has better properties as a measure of evidence. It is further shown that, when a natural restriction on the mixture prior is imposed, the Bayes factor equals the relative belief ratio obtained without using the mixture prior. Even with this restriction, this still leaves open the question of how the strength of evidence is to be measured. It is argued here that the current practice of using the size of the Bayes factor to measure strength is not correct, and a solution to this issue is presented. Several general criticisms of these measures of evidence are also discussed and addressed.
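For readers unfamiliar with the two quantities, the standard textbook definitions are sketched below as background; the mixture-prior formulation for point hypotheses discussed in the abstract is more delicate, so this is illustrative only.

```latex
% Standard textbook definitions (background only; the paper's mixture-prior
% formulation for point hypotheses differs): for a hypothesis
% H_0 : theta in Theta_0 with prior Pi and posterior Pi(. | x),
\[
  RB(\Theta_0) = \frac{\Pi(\Theta_0 \mid x)}{\Pi(\Theta_0)},
  \qquad
  BF(\Theta_0) = \frac{\Pi(\Theta_0 \mid x)\,/\,\Pi(\Theta_0^{c} \mid x)}
                      {\Pi(\Theta_0)\,/\,\Pi(\Theta_0^{c})}
               = \frac{RB(\Theta_0)}{RB(\Theta_0^{c})}.
\]
% When 0 < Pi(Theta_0) < 1, both quantities exceed 1 exactly when the data
% increase the probability of Theta_0, which is the sense in which both
% satisfy the principle of evidence.
```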

Since the 1990s, recombinant growth theory has fascinated academic circles by proposing that new ideas flourish through the reconfiguration of existing ones, leading to accelerated innovation in science and technology. However, after three decades, a marked decline in scientific breakthroughs challenges this theory. We explore its potential limitations, suggesting that while it emphasizes complementarity among ideas, it overlooks the competitive dynamics between them and how this rivalry fosters major breakthroughs. Examining 20 scientific breakthroughs nominated by surveyed scientists, we reveal a recurring pattern in which new ideas are intentionally crafted to challenge and replace established ones. Analyzing 19 million papers spanning a century, we consistently observe, across all fields, periods, and team sizes, a negative correlation between reference atypicality, which reflects the effort to recombine more ideas, and paper disruption, which indicates the extent to which a work represents a major breakthrough. Moreover, our analysis of a novel dataset comparing early and subsequent versions of 2,461 papers offers quasi-experimental evidence that additional efforts to increase reference atypicality indeed reduce the disruption of the same paper. In summary, our analyses challenge recombinant growth theory, suggesting that scientific breakthroughs originate from a clear purpose to replace established, impactful ideas.

Marginal structural models have been widely used in causal inference to estimate mean outcomes under either static treatments or a prespecified set of treatment decision rules. This approach requires imposing a working model for the mean outcome given a sequence of treatments and, possibly, baseline covariates. In this paper, we introduce a dynamic marginal structural model that can be used to estimate an optimal decision rule within a class of parametric rules. Specifically, we estimate the mean outcome as a function of the parameters indexing the class of decision rules, referred to as a regimen-response curve. In general, misspecification of the working model may lead to a biased estimate with questionable causal interpretability. To mitigate this issue, we leverage risk to assess the goodness-of-fit of the imposed working model. We consider the counterfactual risk as our target parameter and derive inverse probability weighting and canonical gradients to map it to the observed data. We provide asymptotic properties of the resulting risk estimators, considering both fixed and data-dependent target parameters. We show that the inverse probability weighting estimator can be efficient and asymptotically linear when the weight functions are estimated using a sieve-based estimator. The proposed method is implemented on the LS1 study to estimate a regimen-response curve for patients with Parkinson's disease.
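As background, a standard single-time-point inverse probability weighting identity is sketched below; the notation (L, A, Y, pi, d_psi) is illustrative, and the paper's longitudinal setting with treatment sequences is more general.

```latex
% A standard single-time-point identity (illustrative notation; the paper's
% longitudinal formulation is more general): with baseline covariates L,
% treatment A, outcome Y, propensity pi(a | L) = P(A = a | L), and a
% parametric rule d_psi, the regimen-response curve psi -> E[Y^{d_psi}]
% is identified by inverse probability weighting as
\[
  E\bigl[Y^{d_\psi}\bigr]
  = E\!\left[ \frac{\mathbf{1}\{A = d_\psi(L)\}}{\pi(A \mid L)}\, Y \right],
\]
% which holds under consistency, no unmeasured confounding, and positivity.
```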

Principal component analysis (PCA) is a longstanding and well-studied approach for dimension reduction. It rests upon the assumption that the underlying signal in the data has low rank, and thus can be well-summarized using a small number of dimensions. The output of PCA is typically represented using a scree plot, which displays the proportion of variance explained (PVE) by each principal component. While the PVE is extensively reported in routine data analyses, to the best of our knowledge the notion of inference on the PVE remains unexplored. In this paper, we consider inference on the PVE. We first introduce a new population quantity for the PVE with respect to an unknown matrix mean. Critically, our interest lies in the PVE of the sample principal components (as opposed to unobserved population principal components); thus, the population PVE that we introduce is defined conditional on the sample singular vectors. We show that it is possible to conduct inference, in the sense of confidence intervals, p-values, and point estimates, on this population quantity. Furthermore, we can conduct valid inference on the PVE of a subset of the principal components, even when the subset is selected using a data-driven approach such as the elbow rule. We demonstrate the proposed approach in simulation and in an application to a gene expression dataset.
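For reference, the familiar sample quantity behind the scree plot can be computed as below. This sketch only reproduces the standard sample PVE from the singular values of the centred data matrix; it does not attempt the paper's inference procedure, which is defined conditional on the sample singular vectors.

```python
import numpy as np

# Illustrative sketch: the sample proportion of variance explained (PVE)
# reported in a scree plot, computed from the singular values of the
# centred data matrix. This is the familiar sample quantity only; it does
# not implement the paper's inference on the population PVE.

def sample_pve(X):
    Xc = X - X.mean(axis=0)                   # column-centre the data
    s = np.linalg.svd(Xc, compute_uv=False)   # singular values
    return s**2 / np.sum(s**2)                # PVE of each principal component

rng = np.random.default_rng(0)
signal = np.outer(rng.normal(size=100), rng.normal(size=10))   # rank-1 signal
X = signal + 0.3 * rng.normal(size=(100, 10))                  # plus noise
pve = sample_pve(X)
print(np.round(pve, 3), pve.sum())   # leading component dominates; PVEs sum to 1
```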

The Laplace eigenvalue problem on circular sectors has eigenfunctions with corner singularities, for which standard methods may produce suboptimal approximation results. To address this issue, this paper proposes a novel numerical algorithm that enhances standard isogeometric analysis by using a single-patch graded mesh refinement scheme. Numerical tests demonstrate optimal convergence rates for both the eigenvalues and the eigenfunctions. Furthermore, the results show that smooth splines possess a superior approximation constant compared to their $C^0$-continuous counterparts for the lower part of the Laplace spectrum; this extends previous findings on the excellent spectral approximation properties of smooth splines from rectangular domains to circular sectors. In addition, graded meshes prove to be particularly advantageous for an accurate approximation of a limited number of eigenvalues. A drawback of the proposed algorithm is the singularity of the isogeometric parameterization: some basis functions do not belong to the solution space of the corresponding weak problem, which constitutes a variational crime. The approach nevertheless proves to be robust. Finally, a hierarchical mesh structure is presented that avoids anisotropic elements, omits redundant degrees of freedom, and keeps the number of basis functions contributing to the variational crime constant, independent of the mesh size. Numerical results validate the effectiveness of hierarchical mesh grading for the simulation of eigenfunctions with and without corner singularities.
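For context, the model problem and its corner behaviour are sketched below in standard form; the exact setting in the paper (boundary conditions, scaling of the sector) may differ, so this is illustrative only.

```latex
% Standard model problem (illustrative; the paper's exact setting may differ):
% the Dirichlet Laplace eigenproblem on the unit circular sector
% S_omega = { (r, theta) : 0 < r < 1, 0 < theta < omega }.
\[
  -\Delta u = \lambda u \quad \text{in } S_\omega,
  \qquad u = 0 \quad \text{on } \partial S_\omega .
\]
% Separation of variables gives the eigenpairs
\[
  u_{k,n}(r,\theta) = J_{k\pi/\omega}\!\bigl(j_{k\pi/\omega,n}\, r\bigr)
                      \sin\!\bigl(k\pi\theta/\omega\bigr),
  \qquad
  \lambda_{k,n} = j_{k\pi/\omega,n}^{2},
\]
% where j_{nu,n} denotes the n-th positive zero of the Bessel function J_nu.
% Near the corner, u_{k,n} behaves like r^{k*pi/omega}; for omega > pi the
% exponent pi/omega is below one, so the gradient blows up at the origin.
% This is the corner singularity that motivates graded mesh refinement.
```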

This paper provides a comprehensive analysis of the optimization and performance evaluation of various routing algorithms within the context of computer networks. Routing algorithms are critical for determining the most efficient path for data transmission between nodes in a network; the efficiency, reliability, and scalability of a network heavily depend on the choice and optimization of its routing algorithm. The paper begins with an overview of fundamental routing strategies, including shortest path, flooding, distance vector, and link state algorithms, and extends to more sophisticated techniques.
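To make the shortest-path building block concrete, a minimal Dijkstra sketch follows. It is illustrative only and not tied to any specific protocol implementation discussed in the text; the toy topology is an assumption.

```python
import heapq

# Illustrative sketch: Dijkstra's shortest-path algorithm, the computation
# at the heart of link-state routing protocols such as OSPF.

def dijkstra(graph, source):
    """graph: {node: {neighbour: cost}}; returns the cheapest cost to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

net = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 5}, "C": {"D": 1}, "D": {}}
print(dijkstra(net, "A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```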

This paper evaluates AI from concrete to abstract (AIcon2abs; Queiroz et al. 2021), a recently proposed method for raising awareness of machine learning among the general public. This is possible due to the use of WiSARD, an easily understandable machine learning mechanism that requires little effort and no technical background from the target users. WiSARD is well suited to digital computing: training consists of writing to RAM-type memories, and classification consists of reading from these memories. The model enables easy visualization and understanding of how training and classification are realized internally, through ludic activities. Furthermore, WiSARD does not require an Internet connection for training and classification, and it can learn from a few examples, or even a single one. WiSARD can also create "mental images" of what it has learned so far, highlighting key features of a given class. The effectiveness of the AIcon2abs method was assessed through the evaluation of a remote course with a workload of approximately six hours, completed by thirty-four Brazilian subjects: 5 children between 8 and 11 years old, 5 adolescents between 12 and 17 years old, and 24 adults between 21 and 72 years old. The collected data were analyzed from two perspectives: (i) as a pre-experiment (of a mixed-methods nature) and (ii) from a phenomenological perspective (of a qualitative nature). AIcon2abs was well rated by almost 100% of the research subjects, and the data collected revealed quite satisfactory results concerning the intended outcomes. This research was approved by the CEP/HUCFF/FM/UFRJ Human Research Ethics Committee.
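To illustrate the write-to-RAM / read-from-RAM idea, here is a simplified WiSARD-style discriminator sketch. The class names and the use of Python sets in place of RAM nodes are assumptions for illustration, not the AIcon2abs implementation.

```python
# Minimal sketch of a WiSARD-style discriminator (simplified; the actual
# WiSARD/AIcon2abs implementation may differ). Training "writes to RAM":
# each input tuple is stored as a seen address. Classification "reads from
# RAM": it counts how many tuples of the input were seen during training.

class Discriminator:
    def __init__(self, input_bits, tuple_size):
        assert input_bits % tuple_size == 0
        self.tuple_size = tuple_size
        self.rams = [set() for _ in range(input_bits // tuple_size)]  # one "RAM" per tuple

    def _addresses(self, bits):
        n = self.tuple_size
        return [tuple(bits[i * n:(i + 1) * n]) for i in range(len(self.rams))]

    def train(self, bits):
        for ram, addr in zip(self.rams, self._addresses(bits)):
            ram.add(addr)                     # write the address

    def response(self, bits):
        return sum(addr in ram for ram, addr in zip(self.rams, self._addresses(bits)))

# One discriminator per class; predict the class whose discriminator fires most.
discs = {"vertical": Discriminator(4, 2), "horizontal": Discriminator(4, 2)}
discs["vertical"].train([1, 0, 1, 0])
discs["horizontal"].train([1, 1, 0, 0])
sample = [1, 0, 1, 0]
print(max(discs, key=lambda c: discs[c].response(sample)))   # -> vertical
```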

Living organisms interact with their surroundings in a closed-loop fashion, where sensory inputs dictate the initiation and termination of behaviours. Even simple animals are able to develop and execute complex plans, an ability that has not yet been replicated in robotics using purely closed-loop input control. We propose a solution to this problem by defining a set of discrete and temporary closed-loop controllers, called "tasks", each representing a closed-loop behaviour. We further introduce a supervisory module that has an innate understanding of physics and causality, through which it can simulate the execution of task sequences over time and store the results in a model of the environment. On the basis of this model, plans can be made by chaining temporary closed-loop controllers. The proposed framework was implemented on a real robot and tested in two scenarios as a proof of concept.
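A purely hypothetical sketch of how such "tasks" and a supervisory module might be expressed in code follows; every name and interface below is invented for illustration and is not taken from the paper.

```python
# Hypothetical sketch (names and interfaces invented): a task is a temporary
# closed-loop controller that runs until its termination condition holds,
# and a supervisor chains tasks by simulating them against an internal model.

class Task:
    def __init__(self, name, control, done):
        self.name = name
        self.control = control            # state -> action (closed-loop law)
        self.done = done                  # state -> bool (termination condition)

    def run(self, state, step, max_steps=100):
        """Run the closed loop against a (possibly simulated) environment `step`."""
        for _ in range(max_steps):
            if self.done(state):
                return state, True
            state = step(state, self.control(state))
        return state, False

def simulate_plan(tasks, state, model_step):
    """Supervisor: simulate a task sequence in the internal environment model."""
    for task in tasks:
        state, ok = task.run(state, model_step)
        if not ok:
            return state, False
    return state, True

# Toy 1-D world: move to x = 5, then back to x = 0.
step = lambda x, a: x + a
go_to = lambda target: Task(f"go_to_{target}",
                            control=lambda x: 1 if x < target else -1,
                            done=lambda x: x == target)
print(simulate_plan([go_to(5), go_to(0)], 0, step))   # (0, True)
```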

A recent trend in mathematical modeling is to publish the computer code together with the research findings. Here we explore the formal question of whether, and in what sense, a computer implementation is distinct from the mathematical model. We argue that, despite the convenience of implemented models, a set of implicit assumptions is perpetuated with the implementation, to the extent that even in widely used models the causal link between the (formal) mathematical model and the set of results is no longer certain. Moreover, although code publication is often seen as an important contributor to reproducible research, we suggest that in some cases the opposite may be true. A new perspective on this topic stems from the accelerating trend that in some branches of research, e.g., artificial intelligence (AI), only implemented models are used. With the advent of quantum computers, we argue that completely novel challenges arise in the distinction between models and implementations.
