
Even though the existence of non-convergent evolution of the states of populations in ecological and evolutionary contexts is an undeniable fact, insightful game-theoretic interpretations of such outcomes are scarce in the evolutionary game theory literature. As a proof of concept, we tap into the information-theoretic concept of relative entropy in order to construct a game-theoretic interpretation for periodic orbits in a wide class of deterministic discrete-time evolutionary game dynamics, primarily investigating the two-player two-strategy case. Effectively, we present a consistent generalization of the evolutionarily stable strategy -- the cornerstone of evolutionary game theory -- and term the generalized concept the information stable orbit. The information stable orbit captures the essence of the evolutionarily stable strategy in that it compares the total payoff obtained against an evolving mutant with the total payoff that the mutant gets while playing against itself. Furthermore, we discuss the connection of the information stable orbit with the dynamical stability of the corresponding periodic orbit.
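To fix ideas, the kind of dynamics and information measure involved can be sketched for a two-strategy game: a discrete-time replicator map together with the relative entropy between population states. The payoff matrix below is a hypothetical anti-coordination game, not one taken from the paper:

```python
import numpy as np

def replicator_step(x, A):
    # One step of the discrete-time replicator map for payoff matrix A.
    f = A @ x                        # fitness of each pure strategy
    return x * f / (x @ f)           # normalize by average fitness

def relative_entropy(p, q):
    # D(p || q): the information-theoretic quantity used to compare states.
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

A = np.array([[1.0, 3.0],
              [2.0, 1.0]])           # hypothetical payoffs
x = np.array([0.9, 0.1])             # initial population state
for _ in range(50):
    x = replicator_step(x, A)
```

For this payoff matrix the orbit converges to the interior fixed point (2/3, 1/3); for other payoff choices the same map can settle on periodic orbits, which is the regime the information stable orbit is designed to address.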

Related content

The journal 《計算機信息》 publishes high-quality papers that broaden the scope of operations research and computing, seeking original research on theory, methods, experiments, systems, and applications, as well as novel surveys, tutorial papers, and papers describing new and useful software tools.
July 6, 2023

Traditionally, robots are regarded as universal motion generation machines. They are designed mainly by kinematic considerations, while the desired dynamics is imposed by strong actuators and high-rate control loops. As an alternative, one can first consider the robot's intrinsic dynamics and optimize it in accordance with the desired tasks. To do so, one needs to better understand the intrinsic, uncontrolled dynamics of robotic systems. In this paper we focus on periodic orbits, as fundamental dynamic properties with many practical applications. Algebraic topology and differential geometry provide some fundamental statements about the existence of periodic orbits. As an example, we present periodic orbits of the simplest multi-body system: the double pendulum in gravity. This simple system already displays a rich variety of periodic orbits. We classify these into three classes: toroidal orbits, disk orbits, and nonlinear normal modes. Some of these we found by geometrical insight and some by numerical simulation and sampling.
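In the small-angle limit, the nonlinear normal modes of the double pendulum connect to the classical linear normal modes, which can be computed from the generalized eigenvalue problem K v = ω² M v. A minimal numerical sketch, assuming equal masses and unit lengths (values chosen for illustration):

```python
import numpy as np

# Linearized double pendulum with equal masses and lengths (assumed):
# M theta'' + K theta = 0, with mass matrix M and stiffness matrix K.
g, l = 9.81, 1.0
M = np.array([[2.0, 1.0],
              [1.0, 1.0]])
K = (g / l) * np.array([[2.0, 0.0],
                        [0.0, 1.0]])

# Generalized eigenvalue problem K v = omega^2 M v.
evals, evecs = np.linalg.eig(np.linalg.solve(M, K))
order = np.argsort(evals.real)
omega2 = evals[order].real           # squared normal-mode frequencies
modes = evecs[:, order].real         # mode shapes
```

The analytic frequencies are ω² = (g/l)(2 ± √2); the lower mode is the in-phase oscillation and the higher one the out-of-phase oscillation.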

Learning with rejection is a prototypical model for studying the interaction between humans and AI on prediction tasks. The model has two components, a predictor and a rejector. Upon the arrival of a sample, the rejector first decides whether to accept it; if accepted, the predictor fulfills the prediction task, and if rejected, the prediction is deferred to humans. The learning problem requires learning a predictor and a rejector simultaneously. This changes the structure of the conventional loss function and often results in non-convexity and inconsistency issues. For the classification with rejection problem, several works develop surrogate losses for the joint learning with provable consistency guarantees; in parallel, there has been less work on the regression counterpart. We study the regression with rejection (RwR) problem and investigate the no-rejection learning strategy, which treats the RwR problem as a standard regression task to learn the predictor. We establish that the suboptimality of the no-rejection learning strategy observed in the literature can be mitigated by enlarging the function class of the predictor. Then we introduce the truncated loss to single out the learning of the predictor, and we show that a consistent surrogate property can be established for the predictor individually more easily than for the predictor and the rejector jointly. Our findings advocate for a two-step learning procedure that first uses all the data to learn the predictor and then calibrates the prediction loss for the rejector. This procedure is better aligned with the common intuition that more data samples lead to a better predictor, and it calls for more effort on the design of calibration algorithms for learning the rejector. While our discussion mainly focuses on the regression problem, the theoretical results and insights generalize to the classification problem as well.
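The two-step procedure can be sketched on a toy heteroscedastic regression problem. Everything below is a hypothetical stand-in (the data, the linear models, and the 20% rejection budget are illustrative choices, not the paper's construction): first fit the predictor on all samples, then calibrate a rejector that defers the inputs with the highest predicted loss.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(-1, 1, n)
y = 2.0 * x + rng.normal(0, 0.1 + np.abs(x))     # noise grows with |x|

# Step 1: no-rejection learning -- fit the predictor on ALL samples.
slope, intercept = np.polyfit(x, y, 1)
pred = slope * x + intercept

# Step 2: calibrate the rejector on predicted loss. A simple proxy
# model maps |x| to expected absolute error, and the rejector defers
# the 20% of inputs with the highest predicted loss.
abs_err = np.abs(y - pred)
a, b = np.polyfit(np.abs(x), abs_err, 1)          # loss-proxy model
pred_loss = a * np.abs(x) + b
threshold = np.quantile(pred_loss, 0.8)
rejected = pred_loss > threshold
```

The rejected inputs are exactly the high-noise region, so the loss on accepted samples is lower than on deferred ones, which is the behaviour the calibration step is meant to achieve.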

This paper presents an algorithm for the preprocessing of observation data aimed at improving the robustness of orbit determination tools. Two objectives are fulfilled: obtaining a refined solution to the initial orbit determination problem and detecting possible outliers in the processed measurements. The uncertainty in the initial estimate is propagated forward in time and progressively reduced by exploiting sensor data available in the propagation window. Differential algebra techniques and a novel automatic domain splitting algorithm for second-order Taylor expansions are used to efficiently propagate uncertainties over time. A multifidelity approach is employed to minimize the computational effort while retaining the accuracy of the propagated estimate. At each observation epoch, a polynomial map is obtained by projecting the propagated states onto the observable space. Domains that do not overlap with the actual measurement are pruned, thus reducing the uncertainty to be further propagated. Measurement outliers are also detected in this step. The refined estimate and retained observations are then used to improve the robustness of batch orbit determination tools. The effectiveness of the algorithm is demonstrated for a geostationary transfer orbit object using synthetic and real observation data from the TAROT network.

To improve the convergence of the randomized Kaczmarz (RK) method for solving linear systems, Bai and Wu (SIAM J. Sci. Comput., 40(1):A592--A606, 2018) introduced a greedy probability criterion for effectively selecting the working row from the coefficient matrix and constructed the greedy randomized Kaczmarz (GRK) method. Due to its simplicity and efficiency, this approach has inspired numerous subsequent works in recent years, such as the capped adaptive sampling rule, the greedy augmented randomized Kaczmarz method, and the greedy randomized coordinate descent method. Since the iterates of the GRK method are random variables, existing convergence analyses all concern the expectation of the error. In this note, we prove that the linear convergence rate of the GRK method is deterministic, i.e., it holds surely rather than merely in expectation. Moreover, Polyak's heavy-ball momentum technique is incorporated to improve the performance of the GRK method. We propose a refined convergence analysis, compared with the technique used in Loizou and Richt\'{a}rik (Comput. Optim. Appl., 77(3):653--710, 2020) for momentum variants of randomized iterative methods, which shows that the proposed GRK method with momentum (mGRK) also enjoys deterministic linear convergence. Numerical experiments show that the mGRK method is more efficient than the GRK method.
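A minimal sketch of the GRK selection rule and its heavy-ball momentum variant. The test problem and the momentum weight are illustrative; see the cited papers for the precise parameter choices and step sizes:

```python
import numpy as np

def grk(A, b, x0, iters=1000, beta=0.0, seed=0):
    # Greedy randomized Kaczmarz; beta > 0 adds Polyak heavy-ball
    # momentum (the mGRK variant).
    rng = np.random.default_rng(seed)
    row_norms = np.sum(A**2, axis=1)     # squared row norms ||a_i||^2
    frob2 = row_norms.sum()              # ||A||_F^2
    x = x_prev = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = b - A @ x
        rr = r @ r
        if rr < 1e-28:                   # system already solved
            break
        scaled = r**2 / row_norms
        # Greedy probability criterion of Bai & Wu (2018): keep only
        # rows whose scaled residual exceeds the adaptive threshold.
        eps = 0.5 * (scaled.max() / rr + 1.0 / frob2)
        U = np.flatnonzero(scaled >= eps * rr)
        i = rng.choice(U, p=r[U]**2 / np.sum(r[U]**2))
        step = (r[i] / row_norms[i]) * A[i]
        x, x_prev = x + step + beta * (x - x_prev), x
    return x
```

Note the working set U is never empty: the row attaining the maximum scaled residual always satisfies the threshold, since rr is at most scaled.max() times the squared Frobenius norm.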

Graph neural networks (GNNs) have become increasingly popular for classification tasks on graph-structured data. Yet, the interplay between graph topology and feature evolution in GNNs is not well understood. In this paper, we focus on node-wise classification, illustrated with community detection on stochastic block model graphs, and explore the feature evolution through the lens of the "Neural Collapse" (NC) phenomenon. When training instance-wise deep classifiers (e.g., for image classification) beyond the zero training error point, NC manifests as a reduction in the deepest features' within-class variability and an increased alignment of their class means to certain symmetric structures. We start with an empirical study showing that a decrease in within-class variability is also prevalent in the node-wise classification setting, though not to the extent observed in the instance-wise case. Then, we theoretically study this distinction. Specifically, we show that even an "optimistic" mathematical model requires that the graphs obey a strict structural condition in order to possess a minimizer with exact collapse. Interestingly, this condition can be met also by heterophilic graphs and relates to recent empirical studies on settings with improved GNN generalization. Furthermore, by studying the gradient dynamics of the theoretical model, we provide reasoning for the partial collapse observed empirically. Finally, we present a study on the evolution of within- and between-class feature variability across layers of a well-trained GNN and contrast the behavior with spectral methods.
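Within-class variability of the kind tracked in such empirical studies is commonly summarized by the ratio of within-class to between-class feature scatter, which is zero under exact collapse. A minimal sketch of that metric (the synthetic features below are purely illustrative):

```python
import numpy as np

def within_class_variability(H, y):
    # NC1-style summary: trace of within-class scatter divided by
    # trace of between-class scatter; ~0 indicates exact collapse.
    mu_g = H.mean(axis=0)
    sw = sb = 0.0
    for c in np.unique(y):
        Hc = H[y == c]
        mu_c = Hc.mean(axis=0)
        sw += np.sum((Hc - mu_c) ** 2)                 # within-class
        sb += len(Hc) * np.sum((mu_c - mu_g) ** 2)     # between-class
    return sw / sb

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 100)
means = np.array([[1.0, 0.0], [0.0, 1.0]])
collapsed = means[y]                                   # zero within-class spread
noisy = collapsed + 0.3 * rng.standard_normal((200, 2))
```

Features that sit exactly on their class means score zero, while noisy features score strictly positive; the paper's empirical observation is that node-wise features decrease this ratio during training, but not all the way to zero.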

Discrete chemical reaction networks formalize the interactions of molecular species in a well-mixed solution as stochastic events. Given their basic mathematical and physical role, the computational power of chemical reaction networks has been widely studied in the molecular programming and distributed computing communities. While for Turing-universal systems there is a universal measure of optimal information encoding based on Kolmogorov complexity, chemical reaction networks are not Turing universal unless error and unbounded molecular counts are permitted. Nonetheless, here we show that the optimal number of reactions to generate a specific count $x \in \mathbb{N}$ with probability $1$ is asymptotically equal to a ``space-aware'' version of the Kolmogorov complexity of $x$, defined as $\mathrm{\widetilde{K}s}(x) = \min_p\left\{\lvert p \rvert / \log \lvert p \rvert + \log(\texttt{space}(\mathcal{U}(p))) : \mathcal{U}(p) = x \right\}$, where $p$ is a program for universal Turing machine $\mathcal{U}$. This version of Kolmogorov complexity incorporates not just the length of the shortest program for generating $x$, but also the space usage of that program. Probability $1$ computation is captured by the standard notion of stable computation from distributed computing, but we limit our consideration to chemical reaction networks obeying a stronger constraint: they ``know when they are done'' in the sense that they produce a special species to indicate completion. As part of our results, we develop a module for encoding and unpacking any $b$ bits of information via $O(b/\log{b})$ reactions, which is information-theoretically optimal for incompressible information. Our work provides one answer to the question of how succinctly chemical self-organization can be encoded -- in the sense of generating precise molecular counts of species as the desired state.
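As a toy illustration of generating an exact count with probability 1, here is a hypothetical binary-counter construction: it uses O(b) reactions for b bits, so it is simpler and strictly weaker than the paper's O(b/log b) packing, and it omits the paper's completion-signal mechanism. Species indices make the final count independent of the order in which reactions fire:

```python
import random
from collections import Counter

def run_crn(reactions, init, seed=0):
    # Apply applicable reactions in random order until none can fire
    # (discrete CRN semantics; rates are irrelevant for stable computation).
    rng = random.Random(seed)
    state = Counter(init)
    while True:
        applicable = [r for r in reactions
                      if all(state[s] >= k for s, k in r[0].items())]
        if not applicable:
            return state
        reactants, products = rng.choice(applicable)
        for s, k in reactants.items():
            state[s] -= k
        for s, k in products.items():
            state[s] += k

def counter_crn(bits):
    # Produce exactly x copies of X, where x has binary expansion
    # `bits` (MSB first): each D doubles per stage, and a leader L
    # injects the stage's bit. Confluent: order of firing is irrelevant.
    rxns = []
    for i, bit in enumerate(bits):
        rxns.append((Counter({f"D{i}": 1}), Counter({f"D{i+1}": 2})))
        prod = Counter({f"L{i+1}": 1})
        prod[f"D{i+1}"] += bit
        rxns.append((Counter({f"L{i}": 1}), prod))
    n = len(bits)
    rxns.append((Counter({f"D{n}": 1}), Counter({"X": 1})))
    return rxns
```

Starting from a single leader L0, the reachable terminal state always holds exactly x copies of X, e.g. bits [1, 0, 1, 1] yields 11 copies.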

Seaweed biomass offers significant potential for climate mitigation, but large-scale, autonomous open-ocean farms are required to fully exploit it. Such farms typically have low propulsion and are heavily influenced by ocean currents. We want to design a controller that maximizes seaweed growth over months by taking advantage of the non-linear, time-varying ocean currents to reach high-growth regions. The complex dynamics and underactuation make this challenging even when the currents are known; it is even harder when only short-term, imperfect forecasts with increasing uncertainty are available. We propose a dynamic programming-based method to efficiently solve for the optimal growth value function when the true currents are known. We additionally present three extensions for the realistic case where only forecasts are known: (1) our method's resulting value function can be used as a feedback policy to obtain the growth-optimal control for all states and times, allowing closed-loop control equivalent to re-planning at every time step and hence mitigating forecast errors; (2) a feedback policy for long-term optimal growth beyond forecast horizons, using seasonal average current data as the terminal reward; and (3) a discounted finite-time dynamic programming (DP) formulation to account for increasing uncertainty in ocean current estimates. We evaluate our approach through 30-day simulations of floating seaweed farms in realistic Pacific Ocean current scenarios. Our method achieves 95.8% of the best possible growth using only 5-day forecasts. This confirms the feasibility of using low-power propulsion and optimal control for enhanced seaweed growth on floating farms under real-world conditions.
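A discounted finite-time DP of this flavor can be sketched on a deliberately tiny 1-D toy. The grid, growth field, current drift, control set, and discount below are all hypothetical stand-ins for the paper's ocean model:

```python
import numpy as np

def backward_dp(growth, flow, T, gamma=0.95):
    # Discounted finite-horizon DP on a 1-D chain of N cells.
    # growth[s]: per-step seaweed growth (reward) in cell s.
    # flow[s]:   drift from currents (cells moved right per step).
    # Control u in {-1, 0, +1} models weak propulsion.
    N = len(growth)
    V = np.zeros(N)                       # terminal value (the paper
                                          # uses seasonal averages here)
    policy = np.zeros((T, N), dtype=int)
    for t in reversed(range(T)):
        V_new = np.empty(N)
        for s in range(N):
            best = -np.inf
            for u in (-1, 0, 1):
                s2 = min(max(s + flow[s] + u, 0), N - 1)
                val = growth[s] + gamma * V[s2]
                if val > best:
                    best, policy[t, s] = val, u
            V_new[s] = best
        V = V_new
    return V, policy
```

With a single high-growth cell at the right end and no drift, the backward pass produces a value function increasing toward that cell and a feedback policy that steers right, which mirrors how the full method uses its value function as a closed-loop controller.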

Software maintenance is an important part of a software system's life cycle. Maintenance tasks of existing software systems suffer from architecture information that diverges over time (architectural drift). The Digital Architecture Twin (DArT) can support software maintenance by providing up-to-date architecture information. For this, the DArT gathers such information and co-evolves with a software system, enabling continuous reverse engineering. But the crucial link for stakeholders to retrieve this information is missing. To fill this gap, we contribute the Architecture Information Query Language (AIQL), which enables stakeholders to access up-to-date and tailored architecture information. We derived four application scenarios in the context of continuous reverse engineering. We showed that the AIQL provides the required functionality to formulate queries for the application scenarios and that the language scales for use with real-world software systems. In a user study, stakeholders agreed that the language is easy to understand and assessed its value for their specific application scenarios.

As artificial intelligence (AI) systems are increasingly embedded in our lives, their presence leads to interactions that shape our behaviour, decision-making, and social interactions. Existing theoretical research has primarily focused on human-to-human interactions, overlooking the unique dynamics triggered by the presence of AI. In this paper, resorting to methods from evolutionary game theory, we study how different forms of AI influence the evolution of cooperation in a human population playing the one-shot Prisoner's Dilemma game, in both well-mixed and structured populations. We find that Samaritan AI agents, which help everyone unconditionally, including defectors, can promote higher levels of cooperation in humans than Discriminatory AI agents, which help only those considered worthy (i.e., cooperative), especially in slow-moving societies where change is viewed with caution or resistance (small intensities of selection). Conversely, in fast-moving societies (high intensities of selection), Discriminatory AIs promote higher levels of cooperation than Samaritan AIs.
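A minimal Monte-Carlo sketch of the well-mixed setup. The payoff values, population sizes, and update details are illustrative assumptions, not the paper's exact protocol; the intensity of selection enters through the Fermi imitation rule:

```python
import numpy as np

def simulate(ai_policy, beta=0.1, n=50, n_ai=10, steps=5000, seed=0):
    # Humans (1 = cooperate, 0 = defect) play a one-shot PD against all
    # other humans and all AI agents; strategies spread via the Fermi
    # rule, with beta the intensity of selection.
    R, S, T, P = 3.0, 0.0, 5.0, 1.0    # standard PD ordering T > R > P > S
    rng = np.random.default_rng(seed)
    strat = rng.integers(0, 2, n)

    def avg_payoff(i):
        nc = strat.sum() - strat[i]              # cooperating human opponents
        nd = (n - 1) - nc
        if strat[i]:
            total = R * nc + S * nd
            total += n_ai * (R if ai_policy(1) else S)
        else:
            total = T * nc + P * nd
            total += n_ai * (T if ai_policy(0) else P)
        return total / (n - 1 + n_ai)

    for _ in range(steps):
        i, j = rng.choice(n, 2, replace=False)
        p_imitate = 1.0 / (1.0 + np.exp(-beta * (avg_payoff(j) - avg_payoff(i))))
        if rng.random() < p_imitate:
            strat[i] = strat[j]
    return strat.mean()

samaritan = lambda s: 1        # helps everyone, including defectors
discriminatory = lambda s: s   # helps only cooperators
```

Sweeping beta in such a simulation is one way to probe the slow-moving versus fast-moving regimes contrasted above, though the paper's quantitative claims rest on its own model, not this sketch.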

Image segmentation remains an open problem, especially when the intensities of the objects of interest overlap due to the presence of intensity inhomogeneity (also known as a bias field). To segment images with intensity inhomogeneities, we propose a bias correction embedded level set model in which Inhomogeneities are Estimated by Orthogonal Primary Functions (IEOPF). In the proposed model, the smoothly varying bias is estimated by a linear combination of a given set of orthogonal primary functions. An inhomogeneous intensity clustering energy is then defined, and membership functions of the clusters described by the level set function are introduced to rewrite the energy as the data term of the proposed model. As in popular level set methods, a regularization term and an arc length term are also included to regularize and smooth the level set function, respectively. The proposed model is then extended to multichannel and multiphase patterns to segment colour images and images with multiple objects, respectively. It has been extensively tested on both synthetic and real images that are widely used in the literature, as well as the public BrainWeb and IBSR datasets. Experimental results and comparisons with state-of-the-art methods demonstrate the advantages of the proposed model in terms of bias correction and segmentation accuracy.
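The core bias-estimation idea, representing the smooth field as a linear combination of orthogonal primary functions, can be sketched in 1-D with Legendre polynomials as a (hypothetical) choice of basis; orthogonality lets each coefficient be recovered by a simple projection integral:

```python
import numpy as np
from numpy.polynomial import legendre

def trap(f, x):
    # Trapezoidal integration of samples f over grid x.
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

# Hypothetical smooth bias field: a linear combination of the first
# few Legendre polynomials on [-1, 1].
coeffs_true = np.array([1.0, 0.4, -0.2, 0.1])
x = np.linspace(-1, 1, 2001)
bias = legendre.legval(x, coeffs_true)

# Recover each coefficient by projection, using orthogonality:
# the integral of P_k^2 over [-1, 1] equals 2 / (2k + 1).
coeffs_est = np.empty(4)
for k in range(4):
    Pk = legendre.legval(x, np.eye(4)[k])      # k-th basis function
    coeffs_est[k] = trap(bias * Pk, x) / (2.0 / (2 * k + 1))
```

In the full model the coefficients are instead found by minimizing the clustering energy jointly with the level set function, but the orthogonal-basis representation is the same.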
