
JADE is an educational board game that we have imagined, designed, built, and used successfully in various contexts. It enables learners to discover and practice software ergonomics concepts and is intended for beginners. Every year we use it for several hours with our second-year computer science students at Lyon 1 University. In this paper, we present the classical version of the game, as well as the design and evaluation process that we applied. We then present the hybrid version of JADE, which relies on QR codes and videos, and report on its use in our teaching (about 850 learners over a total duration of 54 hours, i.e. more than 2500 student-hours). Finally, we discuss the results obtained and outline the evolutions under consideration.


We consider two decision problems in infinite groups. The first problem is Subgroup Intersection: given two finitely generated subgroups $\langle \mathcal{G} \rangle, \langle \mathcal{H} \rangle$ of a group $G$, decide whether the intersection $\langle \mathcal{G} \rangle \cap \langle \mathcal{H} \rangle$ is trivial. The second problem is Coset Intersection: given two finitely generated subgroups $\langle \mathcal{G} \rangle, \langle \mathcal{H} \rangle$ of a group $G$, as well as elements $g, h \in G$, decide whether the intersection of the two cosets $g \langle \mathcal{G} \rangle \cap h \langle \mathcal{H} \rangle$ is empty. We show that both problems are decidable in finitely generated abelian-by-cyclic groups. To this end, we reduce them to the Shifted Monomial Membership problem: deciding whether an ideal of the Laurent polynomial ring over the integers contains an element of the form $X^z - f,\; z \in \mathbb{Z} \setminus \{0\}$. We also point out some obstacles to generalizing these results from abelian-by-cyclic groups to arbitrary metabelian groups.

We introduce the probabilistic two-agent justification logic IPJ, a logic in which we can reason about agents that perform interactive proofs. In order to study the growth rate of the probabilities in IPJ, we present a new method of parametrising IPJ over certain negligible functions. Further, our approach leads to a new notion of zero-knowledge proofs.

The Linguistic Matrix Theory programme introduced by Kartsaklis, Ramgoolam and Sadrzadeh is an approach to the statistics of matrices generated in type-driven distributional semantics, based on permutation-invariant polynomial functions, which are regarded as the key observables encoding the significant statistics. In this paper we generalize the previous results on the approximate Gaussianity of matrix distributions arising from compositional distributional semantics. We also introduce a geometry of observable vectors for words, defined by exploiting the graph-theoretic basis for the permutation invariants and the statistical characteristics of the ensemble of matrices associated with the words. We describe successful applications of this unified framework to a number of tasks in computational linguistics, associated with the distinctions between synonyms, antonyms, hypernyms and hyponyms.
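As a toy illustration (not the paper's actual observables), the following NumPy snippet checks that a few low-degree polynomial functions of a matrix are invariant under simultaneous row/column permutation, $M \mapsto P M P^T$, the symmetry underlying permutation-invariant observables:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))

def invariants(M):
    # A few low-degree permutation-invariant polynomials of a matrix,
    # i.e. unchanged under M -> P M P^T for any permutation matrix P.
    return np.array([
        np.trace(M),          # sum of diagonal entries
        M.sum(),              # sum of all entries
        np.trace(M @ M),      # a degree-2 invariant
        (M * M.T).sum(),      # sum_ij M_ij * M_ji
    ])

# Check invariance under a random simultaneous row/column permutation.
perm = rng.permutation(4)
P = np.eye(4)[perm]            # permutation matrix built from `perm`
assert np.allclose(invariants(M), invariants(P @ M @ P.T))
```

Such invariants are what remains well-defined when word order within the matrix indices carries no meaning; the paper's framework builds its statistics from a graph-theoretic basis of exactly this kind of function.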

Since the emergence of the discipline, software engineering researchers and practitioners have pursued ways to reduce the time and effort required to develop code and to increase productivity. Generative language models are just another step in this journey, and probably not the last one. In this chapter, we propose DAnTE, a Degree of Automation Taxonomy for software Engineering, describing several levels of automation based on the idiosyncrasies of the field. Using the taxonomy, we evaluate several tools employed in the past and in the present for software engineering practices. We then give particular attention to AI-based tools, including generative language models, discussing how they fit within the proposed taxonomy and reasoning about their current limitations. Based on this analysis, we discuss what novel tools could emerge in the medium and long term.

Behavioral experiments on the trust game have shown that trust and trustworthiness are universal among human beings, contradicting the prediction of orthodox economics under the assumption of \emph{Homo economicus}. Some mechanism must therefore be at work that favors their emergence. Most previous explanations, however, resort to factors based on imitative learning, a simple form of social learning. Here, we turn to the paradigm of reinforcement learning, in which individuals update their strategies by evaluating long-term returns through accumulated experience. Specifically, we investigate the trust game with the Q-learning algorithm, where each participant is associated with two evolving Q-tables that guide their decision-making as trustor and trustee, respectively. In the pairwise scenario, we reveal that high levels of trust and trustworthiness emerge when individuals appreciate both their historical experience and future returns. Mechanistically, the evolution of the Q-tables exhibits a crossover that resembles the psychological changes observed in humans. We also provide the phase diagram for the game parameters and conduct a boundary analysis. These findings remain robust when the scenario is extended to a latticed population. Our results thus provide a natural explanation for the emergence of trust and trustworthiness without invoking external factors. More importantly, the proposed paradigm shows potential for deciphering many puzzles in human behavior.
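A minimal, stateless sketch of tabular Q-learning on a discretized trust game might look as follows. This is illustrative only: the action sets, payoff parameters, and hyperparameters are hypothetical, and the paper's mechanism relies on richer, state-dependent Q-tables for each role, which this stripped-down version omits.

```python
import numpy as np

rng = np.random.default_rng(1)

SEND = [0.0, 1.0]          # trustor actions: keep the endowment or send it
RETURN_FRAC = [0.0, 0.5]   # trustee actions: fraction of received amount returned
R = 3.0                    # multiplication factor applied to the sent amount
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration

q_trustor = np.zeros(len(SEND))
q_trustee = np.zeros(len(RETURN_FRAC))

def choose(q):
    # Epsilon-greedy action selection over a Q-table.
    return int(rng.integers(len(q))) if rng.random() < eps else int(np.argmax(q))

for _ in range(5000):
    a, b = choose(q_trustor), choose(q_trustee)
    sent = SEND[a]
    returned = RETURN_FRAC[b] * R * sent
    r_trustor = 1.0 - sent + returned      # trustor starts with endowment 1
    r_trustee = R * sent - returned
    # Standard Q-learning updates toward reward plus discounted best value.
    q_trustor[a] += alpha * (r_trustor + gamma * q_trustor.max() - q_trustor[a])
    q_trustee[b] += alpha * (r_trustee + gamma * q_trustee.max() - q_trustee[b])
```

In this stateless form the trustee's per-round incentive favors returning nothing, reproducing the defection baseline of the one-shot game; the interplay of history and future returns that the paper identifies requires the full state-dependent setup.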

This paper introduces a unified framework called cooperative extensive form games, which (i) generalizes standard non-cooperative games, and (ii) allows for more complex coalition formation dynamics than previous concepts like coalition-proof Nash equilibrium. Central to this framework is a novel solution concept called cooperative equilibrium system (CES). CES differs from Nash equilibrium in two important respects. First, a CES is immune to both unilateral and multilateral `credible' deviations. Second, unlike Nash equilibrium, whose stability relies on the assumption that the strategies of non-deviating players are held fixed, CES allows for the possibility that players may regroup and adjust their strategies in response to a deviation. The main result establishes that every cooperative extensive form game, possibly with imperfect information, possesses a CES. For games with perfect information, the proof is constructive. This framework is broadly applicable in contexts such as oligopolistic markets and dynamic political bargaining.

We propose mechanisms for a mathematical social-choice game designed to mediate decision-making processes for city planning, urban redevelopment, and the architectural (massing) design of urban housing complexes. The proposed game is effectively a multi-player generative configurator equipped with automated appraisal/scoring mechanisms for revealing the aggregate impact of alternatives; it features a participatory digital process that supports transparent and inclusive decision-making in spatial design, ensuring an equitable balance of sustainable development goals. As such, the game empowers a group of decision-makers to reach a fair consensus by mathematically simulating many rounds of trade-offs between their decisions, with different levels of interest or control over various types of investments. Our proposed gamified design process encompasses decision-making about aspects ranging from the most idiosyncratic, such as a site's heritage status and cultural significance, to the physical, such as balancing access to sunlight against the neighbours' right to sunlight; it further ensures the coherence of the entire configuration with respect to a network of desired closeness ratings, the satisfaction of a programme of requirements, and the balancing of individual development goals with communal goals and environmental design codes. The game is implemented entirely as an algebraic computational process on our own digital-twinning platform, using open geospatial data and open-source computational tools such as NumPy. The mathematical process consists of a Markovian design machine for balancing the decisions of actors, a massing configurator equipped with fuzzy logic and multi-criteria decision analysis, algebraic graph-theoretical accessibility evaluators, and automated solar-climatic evaluators based on geospatial computational geometry.
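One building block of such a configurator is multi-criteria aggregation. The sketch below shows a generic weighted-sum step in NumPy; the criteria names, scores, and weights are hypothetical, and the actual platform combines this kind of step with fuzzy logic and Markovian balancing not shown here.

```python
import numpy as np

# Rows: candidate massing alternatives; columns: criteria scored in [0, 1]
# (e.g. sunlight access, heritage fit, closeness satisfaction -- hypothetical).
scores = np.array([
    [0.9, 0.4, 0.7],
    [0.6, 0.8, 0.5],
    [0.3, 0.9, 0.9],
])
# Aggregated actor preferences per criterion, normalized to sum to 1.
weights = np.array([0.5, 0.3, 0.2])

aggregate = scores @ weights          # weighted-sum MCDA score per alternative
best = int(np.argmax(aggregate))      # index of the top-ranked alternative
```

In a multi-round game, the weights themselves would be renegotiated between actors each round rather than fixed as they are here.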

Scientific cooperation at the international level has been well studied in the literature; much less is known about cooperation at the intercontinental level. In this paper, we address this issue by building a collection of approximately 13.8 million publications around the papers of a highly cited author working on complex networks and their applications. The resulting rank-frequency distribution of sequences describing the continents and numbers of countries with which the papers' authors are affiliated follows a power law with exponent $-1.9108(15)$. Such a dependence is known as Zipf's law; originally observed in linguistics, it has since turned out to be very common across various fields. The number of distinct ``continent (number of countries)'' sequences as a function of the number of analyzed papers grows according to a power law with exponent $0.527(14)$, i.e. it follows Heaps' law.
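A Zipf exponent of this kind is commonly estimated by an ordinary least-squares fit in log-log space. The minimal sketch below uses hypothetical frequency counts, not the paper's data:

```python
import numpy as np

# Hypothetical rank-frequency counts roughly following f(r) ~ r^(-1.9).
freqs = np.array([1000, 270, 130, 78, 52, 38, 29, 23])
ranks = np.arange(1, len(freqs) + 1)

# Linear regression of log-frequency on log-rank; the slope estimates
# the Zipf exponent (about -1.9 for the distribution reported above).
slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
```

For real corpora, maximum-likelihood estimators are generally preferred over log-log regression, which is biased in the low-frequency tail; the regression version is shown here only because it is the simplest to state.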

We propose SnCQA, a set of hardware-efficient variational circuits of equivariant quantum convolutional circuits that respect the permutation and spatial lattice symmetries of systems with $n$ qubits. By exploiting the permutation symmetries of the system, such as those of the lattice Hamiltonians common to many quantum many-body and quantum chemistry problems, our quantum neural networks are suitable for machine learning problems where permutation symmetries are present, which can lead to significant savings in computational cost. Aside from its theoretical novelty, we find that our simulations perform well in practical instances of learning ground states in quantum computational chemistry, where we achieve performance comparable to traditional methods with a few tens of parameters. Compared to other traditional variational quantum circuits, such as the pure hardware-efficient ansatz (pHEA), we show that SnCQA is more scalable, accurate, and noise resilient (with $20\times$ better performance on the $3 \times 4$ square lattice and $200\% - 1000\%$ resource savings across various lattice sizes on key criteria such as the number of layers, parameters, and time to converge in our cases), suggesting a potentially favorable experiment on near-term quantum devices.

Increasing evidence shows that flaws in machine learning (ML) algorithm validation are an underestimated global problem. Particularly in automatic biomedical image analysis, chosen performance metrics often do not reflect the domain interest, thus failing to adequately measure scientific progress and hindering translation of ML techniques into practice. To overcome this, our large international expert consortium created Metrics Reloaded, a comprehensive framework guiding researchers in the problem-aware selection of metrics. Following the convergence of ML methodology across application domains, Metrics Reloaded fosters the convergence of validation methodology. The framework was developed in a multi-stage Delphi process and is based on the novel concept of a problem fingerprint - a structured representation of the given problem that captures all aspects that are relevant for metric selection, from the domain interest to the properties of the target structure(s), data set and algorithm output. Based on the problem fingerprint, users are guided through the process of choosing and applying appropriate validation metrics while being made aware of potential pitfalls. Metrics Reloaded targets image analysis problems that can be interpreted as a classification task at image, object or pixel level, namely image-level classification, object detection, semantic segmentation, and instance segmentation tasks. To improve the user experience, we implemented the framework in the Metrics Reloaded online tool, which also provides a point of access to explore weaknesses, strengths and specific recommendations for the most common validation metrics. The broad applicability of our framework across domains is demonstrated by an instantiation for various biological and medical image analysis use cases.
