Crocodiles, among the oldest and most resilient species on Earth, have demonstrated remarkable locomotor abilities both on land and in water, evolving over millennia to adapt to diverse environments. In this paper, we draw inspiration from crocodiles and introduce a highly biomimetic crocodile robot equipped with multiple degrees of freedom and articulated trunk joints. This design is based on a comprehensive analysis of the structural and motion characteristics observed in real crocodiles. To address the limb-torso incoordination the bionic crocodile robot exhibits during movement, we apply the Denavit-Hartenberg (D-H) method for both forward and inverse kinematics analysis of the robot's legs and spine. Through a series of simulation experiments, we investigate the robot's motion stability, fault tolerance, and environmental adaptability under two motor patterns: with and without the involvement of the spine and tail in its movements. Experimental results demonstrate that the bionic crocodile robot exhibits superior motion performance when the spine and tail cooperate with the extremities. This research not only showcases the potential of biomimicry in robotics but also underscores the significance of understanding how nature's designs can inform and enhance our technological innovations.
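
As a quick illustration of the D-H machinery the analysis relies on, the sketch below chains standard Denavit-Hartenberg transforms to obtain the pose of a leg's end point. The link parameters are hypothetical placeholders, not the robot's actual geometry.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params):
    """Chain the per-joint transforms; returns the end-effector pose."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_params:
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Hypothetical 3-joint leg: (theta, d, a, alpha) per joint, angles in radians.
leg = [(0.3, 0.0, 0.10, np.pi / 2), (0.5, 0.0, 0.12, 0.0), (-0.4, 0.0, 0.08, 0.0)]
print(forward_kinematics(leg)[:3, 3])  # foot position in the body frame
```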

Related Content

There are several tools available to infer phylogenetic trees, which depict the evolutionary relationships among biological entities such as viral and bacterial strains in infectious outbreaks, or cancerous cells in tumor progression trees. These tools rely on several inference methods, and the resulting trees are not unique. Thus, methods for comparing phylogenies that are capable of revealing where two phylogenetic trees agree or differ are required. A common approach is to compute a similarity or dissimilarity measure between trees, the Robinson-Foulds distance being one of the most used; it can be computed in linear time and space. Nevertheless, given the large and increasing volume of phylogenetic data, phylogenetic trees are becoming very large, with hundreds of thousands of leaves. In this context, space requirements become an issue both while computing tree distances and while storing trees. We therefore propose an efficient implementation of the Robinson-Foulds distance over succinct tree representations. Our implementation also generalizes the Robinson-Foulds distance to labelled phylogenetic trees, i.e., trees containing labels on all nodes, instead of only on leaves. Experimental results show that we still achieve linear time while requiring less space. Our implementation is available as an open-source tool at //github.com/pedroparedesbranco/TreeDiff.
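
To make the distance concrete, here is a minimal (non-succinct) sketch of the classic Robinson-Foulds computation on rooted trees, comparing the sets of clades induced by internal nodes. The nested-tuple tree encoding is an assumption for illustration only; the paper's contribution is doing this over succinct representations.

```python
def clades(tree, out):
    """Collect the leaf set under every internal node of a rooted tree.
    Trees are nested tuples; leaves are strings (a toy encoding)."""
    if isinstance(tree, str):          # leaf: trivial clade, not recorded
        return frozenset([tree])
    leaves = frozenset().union(*(clades(child, out) for child in tree))
    out.add(leaves)
    return leaves

def robinson_foulds(t1, t2):
    """RF distance = size of the symmetric difference of the clade sets."""
    c1, c2 = set(), set()
    clades(t1, c1)
    clades(t2, c2)
    return len(c1 ^ c2)

t1 = (("A", "B"), ("C", "D"))
t2 = (("A", "C"), ("B", "D"))
print(robinson_foulds(t1, t2))  # 4: no non-root clade is shared
```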

Stabbing Planes (also known as Branch and Cut) is a proof system introduced very recently which, informally speaking, extends the DPLL method by branching on integer linear inequalities instead of single variables. The techniques known so far to prove size and depth lower bounds for Stabbing Planes are generalizations of those used for the Cutting Planes proof system: size lower bounds are established by monotone circuit arguments, while depth lower bounds are found via communication complexity. As such, these bounds apply to lifted versions of combinatorial statements. Rank lower bounds for Cutting Planes are also obtained by geometric arguments called protection lemmas. In this work we introduce two new geometric approaches to prove size/depth lower bounds in Stabbing Planes that work for any formula: (1) the antichain method, relying on Sperner's Theorem, and (2) the covering method, which uses results on essential coverings of the Boolean cube by linear polynomials, in turn relying on Alon's Combinatorial Nullstellensatz. We demonstrate their use on classes of combinatorial principles such as the Pigeonhole principle, the Tseitin contradictions and the Linear Ordering Principle. By the first method we prove almost linear size lower bounds and optimal logarithmic depth lower bounds for the Pigeonhole principle, and analogous lower bounds for the Tseitin contradictions over the complete graph and for the Linear Ordering Principle. By the covering method we obtain a superlinear size lower bound and a logarithmic depth lower bound for Stabbing Planes proofs of Tseitin contradictions over a grid graph.
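
For reference, the combinatorial fact the antichain method relies on is Sperner's Theorem, which bounds the size of any antichain in the Boolean lattice:

```latex
% Sperner's Theorem: an antichain A in 2^{[n]} (a family of subsets of
% {1,...,n} none of which contains another) satisfies
\[
  |\mathcal{A}| \;\le\; \binom{n}{\lfloor n/2 \rfloor}
  \;=\; \Theta\!\left(\frac{2^n}{\sqrt{n}}\right).
\]
```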

Interactions between genes and environmental factors may play a key role in the etiology of many common disorders. Several regularized generalized linear models (GLMs) have been proposed for hierarchical selection of gene-by-environment interaction (GEI) effects, where a GEI effect is selected only if the corresponding genetic main effect is also selected in the model. However, none of these methods allows the inclusion of random effects to account for population structure, subject relatedness and shared environmental exposure. In this paper, we develop a unified approach based on regularized penalized quasi-likelihood (PQL) estimation to perform hierarchical selection of GEI effects in sparse regularized mixed models. We compare the selection and prediction accuracy of our proposed model with existing methods through simulations in the presence of population structure and shared environmental exposure. We show that, across all simulation scenarios, our proposed method enforces sparsity by controlling the number of false positives in the model while achieving the best predictive performance among the penalized methods compared. Finally, we apply our method to real data from the Orofacial Pain: Prospective Evaluation and Risk Assessment (OPPERA) study, and find that it retrieves previously reported significant loci.
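
As a hedged illustration of how a penalty can encode the heredity constraint (this is not the authors' PQL algorithm), the sketch below applies one proximal step of a sparse-group penalty that couples a genetic main effect with its GEI effect, so the interaction can only survive if the pair survives as a group; all parameter values are toy.

```python
import numpy as np

def prox_hierarchical(beta, theta, lam1, lam2, step):
    """One proximal step for a sparse-group penalty coupling a genetic main
    effect beta_j with its GEI effect theta_j: if the (main, interaction)
    group is shrunk to zero, the interaction cannot enter on its own."""
    # Inner l1 shrinkage on the interaction effect only.
    theta = np.sign(theta) * max(abs(theta) - step * lam2, 0.0)
    # Group l2 shrinkage on the (main effect, interaction) pair.
    norm = np.hypot(beta, theta)
    scale = max(1.0 - step * lam1 / norm, 0.0) if norm > 0 else 0.0
    return beta * scale, theta * scale

# Toy values: a strong main effect keeps the group alive; the weak
# interaction is shrunk hard by the inner l1 penalty.
print(prox_hierarchical(beta=0.8, theta=0.1, lam1=0.5, lam2=0.5, step=0.1))
```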

Enabling humans and robots to collaborate effectively requires purposeful communication and an understanding of each other's affordances. Prior work in human-robot collaboration has incorporated knowledge of human affordances, i.e., their action possibilities in the current context, into autonomous robot decision-making. This "affordance awareness" is especially promising for service robots that need to know when and how to assist a person who cannot independently complete a task. However, robots still fall short in performing many common tasks autonomously. In this work-in-progress paper, we propose an augmented reality (AR) framework that bridges the gap in an assistive robot's capabilities by actively engaging with a human through a shared affordance-awareness representation. Leveraging the complementary perspectives of a human wearing an AR headset and a robot's onboard sensors, we can build a perceptual representation of the shared environment and model regions of each agent's affordances. The AR interface also allows both agents to communicate affordances with one another, as well as to prompt for assistance when attempting to perform an action outside their affordance region. This paper presents the main components of the proposed framework and discusses its potential through a domestic cleaning task experiment.
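
A toy sketch of what a shared affordance-awareness representation could look like (names and structure are hypothetical, not the paper's implementation): spatial regions map to the actions each agent can perform there, and an empty answer is the cue to prompt the other agent for assistance.

```python
from dataclasses import dataclass, field

@dataclass
class AffordanceMap:
    """Toy shared affordance-awareness representation: each spatial region
    maps to the agents (human, robot) that can act there."""
    regions: dict = field(default_factory=dict)  # region id -> {agent: [actions]}

    def add(self, region, agent, actions):
        self.regions.setdefault(region, {})[agent] = list(actions)

    def who_can(self, region, action):
        """Agents able to perform `action` in `region`; empty -> ask for help."""
        return [a for a, acts in self.regions.get(region, {}).items() if action in acts]

m = AffordanceMap()
m.add("under_table", "robot", ["sweep"])          # low clearance: robot only
m.add("shelf_top", "human", ["wipe", "grasp"])    # out of the robot's reach
print(m.who_can("shelf_top", "wipe"))             # ['human'] -> robot requests help
```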

Traffic accidents, being a significant contributor to both human casualties and property damage, have long been a focal point of research for many scholars in the field of traffic safety. However, previous studies, whether focusing on static environmental assessments or dynamic driving analyses, and whether on pre-accident prediction or post-accident rule analysis, have typically been conducted in isolation; an effective framework for developing a comprehensive understanding and application of traffic safety has been lacking. To address this gap, this paper introduces AccidentGPT, a comprehensive multi-modal large model for accident analysis and prevention. AccidentGPT establishes a multi-modal information interaction framework grounded in multi-sensor perception, thereby enabling a holistic approach to accident analysis and prevention in the field of traffic safety. Specifically, its capabilities can be categorized as follows: for autonomous vehicles, it provides comprehensive environmental perception and understanding to control the vehicle and avoid collisions; for human-driven vehicles, it offers proactive long-range safety warnings and blind-spot alerts, as well as safe-driving recommendations and behavioral norms through human-machine dialogue and interaction; and for traffic police and management agencies, it supports intelligent, real-time analysis of traffic safety, encompassing pedestrians, vehicles, roads, and the environment, through collaborative perception from multiple vehicles and roadside devices. The system is also capable of providing a thorough analysis of accident causes and liability after vehicle collisions. Our framework stands as the first large model to integrate comprehensive scene understanding into traffic safety studies.
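
A minimal sketch of the role-specific routing the abstract describes, with fused perception dispatched to the three user groups; the message formats and names are hypothetical, not AccidentGPT's API.

```python
from dataclasses import dataclass

@dataclass
class PerceptionBundle:
    """Toy stand-in for fused multi-sensor, multi-vehicle perception."""
    hazards: list          # e.g. ["pedestrian_crossing", "blind_spot_vehicle"]
    source: str            # "ego_sensors" | "v2x" | "roadside"

def route(bundle, recipient):
    """Role-specific outputs mirroring the three user groups in the abstract."""
    if recipient == "autonomous_vehicle":
        return {"action": "avoid", "hazards": bundle.hazards}
    if recipient == "human_driver":
        return {"warning": f"Caution: {', '.join(bundle.hazards)}"}
    if recipient == "traffic_management":
        return {"report": {"hazards": bundle.hazards, "source": bundle.source}}
    raise ValueError(f"unknown recipient: {recipient}")

print(route(PerceptionBundle(["blind_spot_vehicle"], "v2x"), "human_driver"))
```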

The aim of this article is to investigate the well-posedness, stability and convergence of solutions to the time-dependent Maxwell's equations for the electric field in conductive media, in both continuous and discrete settings. The situation we consider represents a physical problem where a subdomain is immersed in a homogeneous medium characterized by constant dielectric permittivity and conductivity. It is well known that in such homogeneous regions the solution to Maxwell's equations also solves the wave equation, which makes computations very efficient. Our problem can therefore be viewed as a coupling problem, for which we derive stability and convergence analyses. A number of numerical examples validate the theoretical convergence rates of the proposed stabilized explicit finite element scheme.
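
A hedged sketch of the model problem (assuming unit magnetic permeability and no source term): in a region with constant permittivity and conductivity where the electric field is divergence-free, the curl-curl identity reduces Maxwell's equations for the electric field to a damped wave equation, which is what makes computations in the homogeneous medium efficient.

```latex
% Second-order form for the electric field E, with permittivity eps and
% conductivity sigma (unit magnetic permeability, source-free assumed):
\[
  \varepsilon\,\partial_t^2 E + \sigma\,\partial_t E
  + \nabla\times\nabla\times E = 0 .
\]
% With constant coefficients and \nabla\cdot E = 0, the identity
% \nabla\times\nabla\times E = \nabla(\nabla\cdot E) - \Delta E
% yields a damped wave equation:
\[
  \varepsilon\,\partial_t^2 E + \sigma\,\partial_t E - \Delta E = 0 .
\]
```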

In this work, we study the stability of Graph Convolutional Neural Networks (GCNs) under random small perturbations of the underlying graph topology, i.e., under a limited number of edge insertions or deletions. We derive a novel bound on the expected difference between the outputs of unperturbed and perturbed GCNs. The proposed bound depends explicitly on the magnitude of the perturbation of the eigenpairs of the Laplacian matrix, and this perturbation in turn depends explicitly on which edges are inserted or deleted. We thus obtain a quantitative characterization of the effect of perturbing specific edges on the stability of the network. We leverage tools from small perturbation analysis to express the bounds in closed, albeit approximate, form, in order to enhance the interpretability of the results, without the need to compute any perturbed shift operator. Finally, we numerically evaluate the effectiveness of the proposed bound.
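
To see the kind of eigenpair perturbation the bound is built on, here is a hedged numerical sketch on a toy graph (not the paper's bound itself): inserting an edge (a, b) adds a rank-one term to the Laplacian, and first-order perturbation theory predicts the eigenvalue shifts from the unperturbed eigenvectors alone.

```python
import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

# Toy graph: a path on 4 nodes (all Laplacian eigenvalues are simple).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
lam, U = np.linalg.eigh(laplacian(A))

# Inserting edge (a, b) adds dL = (e_a - e_b)(e_a - e_b)^T, so to first
# order a simple eigenvalue shifts by u_i^T dL u_i = (u_i[a] - u_i[b])**2.
a, b = 0, 3
approx = (U[a, :] - U[b, :]) ** 2

A2 = A.copy(); A2[a, b] = A2[b, a] = 1.0
exact = np.linalg.eigvalsh(laplacian(A2)) - lam
# First order only, so the match is rough for a perturbation this large.
print(np.round(approx, 3), np.round(exact, 3))
```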

In this contribution, we derive a consistent variational formulation for computational homogenization methods and show that traditional FE2 and IGA2 approaches are special discretization and solution techniques of this more general framework. This allows us to dramatically enhance both the numerical analysis and the solution of the arising algebraic system. In particular, we expand the dimension of the continuous system, discretize the higher-dimensional problem consistently, and afterwards apply a discrete null-space matrix to remove the additional dimensions. A benchmark problem, for which we can obtain an analytical solution, demonstrates the superiority of the chosen approach, which reduces the immense computational costs of traditional FE2 and IGA2 formulations to a fraction of the original requirements. Finally, we demonstrate a further reduction of the computational costs for the solution of general non-linear problems.
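
The null-space step can be illustrated on a generic equality-constrained system (a toy sketch, not the paper's homogenization setting): the expanded problem carries constrained extra dimensions, and a discrete null-space matrix N with G N = 0 removes them, leaving a smaller unconstrained system.

```python
import numpy as np
from scipy.linalg import null_space

# Toy constrained system: minimize 0.5 z^T K z - f^T z subject to G z = 0.
K = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
f = np.array([1.0, 0.0, 1.0])
G = np.array([[1.0, 1.0, 1.0]])   # one linear constraint

# Null-space method: any admissible z is z = N u with G N = 0, so the
# constrained problem reduces to a smaller unconstrained one.
N = null_space(G)                 # columns span ker(G)
u = np.linalg.solve(N.T @ K @ N, N.T @ f)
z = N @ u

print(np.round(z, 4), "constraint residual:", (G @ z).item())
```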

Coordinate exchange (CEXCH) is a popular algorithm for generating exact optimal experimental designs. The authors of CEXCH advocated a highly greedy implementation: one that exchanges and optimizes single elements of the design matrix at a time. We revisit the effect of greediness on CEXCH's efficacy for generating highly efficient designs. We implement the single-element CEXCH (most greedy), a design-row exchange (medium greedy), and particle swarm optimization (PSO; least greedy) on 21 exact response surface design scenarios under the $D$- and $I$-criteria, for which well-known optimal designs have been reproduced by several researchers. We found essentially no difference in performance between the most greedy and the medium greedy CEXCH. PSO did exhibit better efficacy than CEXCH for generating $D$-optimal designs, and for most $I$-optimal designs, but not to a strong degree under our parametrization. This work suggests that further investigation of the greediness dimension and its effect on CEXCH efficacy, on a wider suite of models and criteria, is warranted.
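
For concreteness, below is a toy single-element (most greedy) coordinate-exchange pass for the $D$-criterion under a first-order model; it is a hedged sketch of the general idea, not the implementation benchmarked in the paper.

```python
import numpy as np

def d_criterion(D):
    """log|X'X| for a first-order model with intercept (larger is better)."""
    X = np.hstack([np.ones((D.shape[0], 1)), D])
    sign, logdet = np.linalg.slogdet(X.T @ X)
    return logdet if sign > 0 else -np.inf

def cexch(n_runs=6, n_factors=2, levels=(-1.0, 0.0, 1.0), sweeps=20, seed=0):
    """Most greedy coordinate exchange: optimize one design element at a time."""
    rng = np.random.default_rng(seed)
    D = rng.choice(levels, size=(n_runs, n_factors))
    for _ in range(sweeps):
        improved = False
        for i in range(n_runs):
            for j in range(n_factors):
                best = d_criterion(D)
                for lv in levels:           # try each candidate level in place
                    old, D[i, j] = D[i, j], lv
                    if d_criterion(D) > best + 1e-12:
                        best, improved = d_criterion(D), True
                    else:
                        D[i, j] = old       # revert non-improving exchange
        if not improved:                    # converged: a full sweep changed nothing
            break
    return D

print(cexch())
```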

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. This lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
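
The formalization step is easy to state: Shepard's universal law says perceived similarity decays exponentially with distance in a psychological similarity space. A minimal sketch, assuming explanations are embedded as feature vectors (the embedding itself is hypothetical):

```python
import numpy as np

def shepard_similarity(x, y):
    """Shepard's universal law of generalization: perceived similarity
    decays exponentially with distance in psychological space."""
    return np.exp(-np.linalg.norm(np.asarray(x) - np.asarray(y)))

# Toy use: weight how strongly a previously seen (explanation, decision)
# pair generalizes to a new explanation; coordinates are hypothetical
# embeddings of saliency maps in the similarity space.
seen_explanation, new_explanation = [0.2, 0.9], [0.3, 0.7]
print(round(shepard_similarity(seen_explanation, new_explanation), 3))
```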
