
The intersection of ground reaction forces near a point above the center of mass has been observed in computer simulation models and human walking experiments. Because it is observed so ubiquitously, the intersection point (IP) is commonly assumed to provide postural stability for bipedal walking. In this study, we challenge this assumption by asking whether walking without an IP is possible. Deriving gaits with a neuromuscular reflex model through multi-stage optimization, we found stable walking patterns that show no signs of the IP-typical intersection of ground reaction forces. The non-IP gaits we found are stable and successfully reject step-down perturbations, which indicates that an IP is not necessary for locomotion robustness or postural stability. A collision-based analysis shows that non-IP gaits feature center of mass (CoM) dynamics in which the CoM velocity vector and the ground reaction force vector increasingly oppose each other, indicating an increased mechanical cost of transport. Although our computer simulation results have yet to be confirmed through experimental studies, they already indicate that the role of the IP in postural stability should be further investigated. Moreover, our observations on the CoM dynamics and gait efficiency suggest that the IP may have an alternative or additional function that should be considered.
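
To make the IP concrete, here is a minimal sketch (not the authors' code) of how an intersection point can be estimated from gait data: each ground reaction force is treated as a line of action through its center of pressure in the sagittal plane, and the IP is the least-squares point nearest to all of these lines. The input arrays and the synthetic example are our own assumptions.

```python
# Estimate the intersection point (IP) of ground reaction forces.
# cop: (n, 2) centers of pressure; grf: (n, 2) force vectors (sagittal plane).
import numpy as np

def intersection_point(cop: np.ndarray, grf: np.ndarray) -> np.ndarray:
    """Return the point minimizing the squared distance to all force lines."""
    d = grf / np.linalg.norm(grf, axis=1, keepdims=True)   # unit directions
    # Projector onto the complement of each line direction: I - d d^T
    P = np.eye(2)[None, :, :] - d[:, :, None] * d[:, None, :]  # (n, 2, 2)
    A = P.sum(axis=0)                                      # sum of projectors
    b = np.einsum('nij,nj->i', P, cop)                     # sum of P_i @ cop_i
    return np.linalg.solve(A, b)

# Synthetic check: forces from the ground roughly aimed at (0.0, 1.2).
rng = np.random.default_rng(0)
cop = np.column_stack([rng.uniform(-0.3, 0.3, 50), np.zeros(50)])
grf = np.array([0.0, 1.2]) - cop + rng.normal(0, 0.02, (50, 2))
print(intersection_point(cop, grf))   # approximately (0.0, 1.2)
```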

Related Content

Biogenic Volatile Organic Compounds (BVOCs) emitted from terrestrial ecosystems into the Earth's atmosphere are an important component of atmospheric chemistry. Because measurements are scarce, reliable enhancement of BVOC emission maps can provide denser data for atmospheric chemistry, climate, and air quality models. In this work, we propose a strategy to super-resolve coarse BVOC emission maps by simultaneously exploiting the contributions of different compounds. To this end, we first investigate the spatial interconnections between several BVOC species. Then, we exploit the found similarities to build a Multi-Image Super-Resolution (MISR) system, in which a number of emission maps associated with diverse compounds are aggregated to boost Super-Resolution (SR) performance. We compare different configurations regarding which species are combined and how many BVOCs are joined. Our experimental results show that incorporating the relationships between BVOCs into the process can substantially improve the accuracy of the super-resolved maps. Interestingly, the best results are achieved when we aggregate the emission maps of strongly uncorrelated compounds. This peculiarity seems to confirm what has already been conjectured for other data domains, i.e., joining uncorrelated information is more helpful than joining correlated information for boosting MISR performance. Notably, the proposed work represents the first attempt at SR of BVOC emissions through the fusion of multiple different compounds.
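
As a rough illustration of the MISR idea, the sketch below stacks coarse emission maps of several compounds as input channels of a small convolutional network that super-resolves one target compound. The architecture, layer widths, and upsampling choice are our assumptions, not the network used in the paper.

```python
# Multi-image super-resolution sketch: several compounds in, one map out.
import torch
import torch.nn as nn

class BVOCMISR(nn.Module):
    def __init__(self, n_compounds: int, scale: int = 2, width: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(n_compounds, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            # PixelShuffle upsampling to the target resolution
            nn.Conv2d(width, scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, maps: torch.Tensor) -> torch.Tensor:
        # maps: (batch, n_compounds, H, W) coarse emission maps
        return self.body(maps)            # (batch, 1, H*scale, W*scale)

model = BVOCMISR(n_compounds=4, scale=2)
coarse = torch.rand(8, 4, 32, 32)         # four jointly used compounds
print(model(coarse).shape)                # torch.Size([8, 1, 64, 64])
```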

Following the traditional paradigm of convolutional neural networks (CNNs), modern CNNs manage to keep pace with more recent, for example transformer-based, models by increasing not only model depth and width but also kernel size. This results in large numbers of learnable parameters that need to be handled during training. While we follow the convolutional paradigm with its corresponding spatial inductive bias, we question the significance of \emph{learned} convolution filters. In fact, our findings demonstrate that many contemporary CNN architectures can achieve high test accuracies without ever updating their randomly initialized (spatial) convolution filters. Instead, simple linear combinations (implemented through efficient $1\times 1$ convolutions) suffice to recombine even random filters into expressive network operators. Furthermore, these combinations of random filters can implicitly regularize the resulting operations, mitigating overfitting and enhancing overall performance and robustness. Conversely, retaining the ability to learn filter updates can impair network performance. Lastly, although we observe only relatively small gains from learning $3\times 3$ convolutions, the gains increase with kernel size, owing to non-idealities in the independent and identically distributed (\textit{i.i.d.}) nature of default initialization techniques.
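
A minimal sketch of the scheme described above: the spatial $3\times 3$ filters are randomly initialized and frozen, and only the $1\times 1$ convolutions that linearly recombine their responses are trained. Layer sizes are illustrative assumptions.

```python
# Frozen random spatial filters, learnable 1x1 recombination.
import torch
import torch.nn as nn

class RandomFilterBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, n_random: int = 64, k: int = 3):
        super().__init__()
        # Spatial filters: randomly initialized and never updated.
        self.spatial = nn.Conv2d(in_ch, n_random, k, padding=k // 2, bias=False)
        self.spatial.weight.requires_grad_(False)
        # Learnable 1x1 convolution recombines the random responses.
        self.mix = nn.Conv2d(n_random, out_ch, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.mix(self.spatial(x))

block = RandomFilterBlock(in_ch=16, out_ch=32)
trainable = [n for n, p in block.named_parameters() if p.requires_grad]
print(trainable)   # ['mix.weight'] -- only the 1x1 combination is learned
print(block(torch.rand(1, 16, 8, 8)).shape)   # torch.Size([1, 32, 8, 8])
```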

This work aims to address the general order-manipulation issue in blockchain-based decentralized exchanges (DEX) by exploring the benefits of employing differentially order-fair atomic broadcast (of-ABC) mechanisms for transaction ordering and frequent batch auctions (FBA) for execution. In the suggested of-ABC approach, transactions submitted to a sufficient number of blockchain validators are ordered before or along with later transactions. FBA then executes transactions with a uniform-price double auction that prioritizes price instead of transaction order within the same committed batch. To demonstrate the effectiveness of our order-but-not-execute-in-order design, we compare the welfare loss and liquidity provision in a DEX under FBA and its continuous counterpart, the Central Limit Order Book (CLOB). Assuming that the exchange is realized over an of-ABC protocol, we find that FBA achieves better social welfare than CLOB when (1) public information affecting the fundamental value of an asset is revealed more frequently than private information, or (2) the block generation interval is sufficiently large, or (3) the priority fees attached to submitted transactions are small compared to the asset price changes. Further, our findings also indicate that (4) liquidity provision is better under FBA when the market is not thin, that is, when more transactions are submitted by investors and traders per block, or (5) when fewer privately informed traders are present. Overall, in the settings mentioned above, the adoption of FBA and of-ABC mechanisms in DEX demonstrates improved performance in terms of social welfare and liquidity provision compared to the continuous CLOB model.
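
To illustrate the execution side, here is a minimal sketch (our own simplification, not the paper's mechanism) of a uniform-price double auction clearing one committed batch: all crossing orders trade at a single price, so the ordering of transactions within the batch does not affect execution.

```python
# Uniform-price double auction over one batch of limit orders.
from typing import List, Tuple

def uniform_price_auction(bids: List[Tuple[float, int]],
                          asks: List[Tuple[float, int]]) -> Tuple[float, int]:
    """bids/asks: (limit price, quantity). Returns (clearing price, volume)."""
    buy = sorted((p, q) for p, q in bids)[::-1]     # highest bid first
    sell = sorted((p, q) for p, q in asks)          # lowest ask first
    # Expand to unit lots, then match while the bid covers the ask.
    buy_units = [p for p, q in buy for _ in range(q)]
    sell_units = [p for p, q in sell for _ in range(q)]
    volume = 0
    for b, s in zip(buy_units, sell_units):
        if b < s:
            break
        volume += 1
    if volume == 0:
        return float('nan'), 0
    # Any price between the marginal ask and marginal bid clears; use midpoint.
    price = (buy_units[volume - 1] + sell_units[volume - 1]) / 2
    return price, volume

print(uniform_price_auction([(10.0, 2), (9.0, 1)], [(8.0, 1), (9.5, 2)]))
# -> (9.75, 2): two units clear at one price, regardless of arrival order
```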

Motivated by recent advancements in text-to-image diffusion, we study the erasure of specific concepts from the model's weights. While Stable Diffusion has shown promise in producing explicit or realistic artwork, it has raised concerns regarding its potential for misuse. We propose a fine-tuning method that can erase a visual concept from a pre-trained diffusion model, given only the name of the style and using negative guidance as a teacher. We benchmark our method against previous approaches that remove sexually explicit content and demonstrate its effectiveness, performing on par with Safe Latent Diffusion and censored training. To evaluate artistic style removal, we conduct experiments erasing five modern artists from the network and conduct a user study to assess human perception of the removed styles. Unlike previous methods, our approach can remove concepts from a diffusion model permanently rather than modifying the output at inference time, so it cannot be circumvented even if a user has access to the model weights. Our code, data, and results are available at //erasing.baulab.info/
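
The abstract suggests a negative-guidance objective in which a frozen copy of the model acts as a teacher; the sketch below shows one plausible form of such a loss. The callables, tensor shapes, and the guidance scale eta are placeholders, not the paper's exact formulation.

```python
# Negative-guidance erasure loss: steer the student's prediction for
# concept embedding c_emb AWAY from the frozen teacher's concept direction.
import torch
import torch.nn.functional as F

def erasure_loss(student, frozen, x_t, t, c_emb, null_emb, eta=1.0):
    with torch.no_grad():
        eps_uncond = frozen(x_t, t, null_emb)   # unconditional prediction
        eps_cond = frozen(x_t, t, c_emb)        # concept-conditioned prediction
        # Negative guidance: move away from the concept direction.
        target = eps_uncond - eta * (eps_cond - eps_uncond)
    return F.mse_loss(student(x_t, t, c_emb), target)

# Tiny stand-in "models" just to show the shapes involved:
dummy = lambda x_t, t, emb: x_t * 0 + emb
x = torch.zeros(2, 4)
print(erasure_loss(dummy, dummy, x, 0, torch.ones(2, 4), torch.zeros(2, 4)))
```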

Quantum key distribution provides a promising solution for sharing secure keys between two distant parties with unconditional security. Nevertheless, quantum key distribution is still severely threatened by the imperfections of devices. In particular, classical pulse correlations threaten security when decoy states are sent. To address this problem and simplify experimental requirements, we propose a phase-matching quantum key distribution protocol without intensity modulation. Instead of using decoy states, we propose a novel method to estimate the theoretical upper bound on the phase error rate contributed by even-photon-number components. Simulation results show that the transmission distance of our protocol can reach 305 km in telecommunication fiber. Furthermore, we perform a proof-of-principle experiment to demonstrate the feasibility of our protocol; the key rate reaches 22.5 bps under 45 dB of channel loss. By addressing the security loophole of pulse-intensity correlations and replacing continuous phase randomization with 6 or 8 discrete phase slices, our protocol provides a promising solution for constructing quantum networks.
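
As a small illustration of discrete phase randomization, the following sketch (our assumption, not the protocol specification) estimates the sifting rate when each party draws one of M phase slices and a round is kept if the phases match or differ by pi.

```python
# Monte Carlo estimate of the phase-slice sifting rate.
import random

def sift_rate(M: int = 8, rounds: int = 100_000) -> float:
    kept = 0
    for _ in range(rounds):
        ka, kb = random.randrange(M), random.randrange(M)
        diff = (ka - kb) % M
        # Keep the round if the phase offset is 0 or pi (bit flip).
        if diff == 0 or (M % 2 == 0 and diff == M // 2):
            kept += 1
    return kept / rounds

print(sift_rate(6), sift_rate(8))   # ~ 2/M: about 0.333 and 0.25
```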

This research aims to build a multivariate statistical model for assessing users' perceptions of the acceptance of ride-sharing services in Dhaka City. A structured questionnaire is developed based on the users' reported attitudes and perceived risks. A total of 350 normally distributed responses are collected from ride-sharing service users and stakeholders of Dhaka City. Respondents are interviewed about their experience and opinions of ride-sharing services through the stated-preference questionnaire. Structural Equation Modeling (SEM) is used to validate the research hypotheses. Statistical parameters and several trials are used to choose the best SEM. The responses are also analyzed using the Relative Importance Index (RII) method, validating the chosen SEM. Within the SEM, the quality of ride-sharing services is measured by two latent and eighteen observed variables. The latent variable 'safety & security' is more influential than 'service performance' on the overall quality-of-service index. Under 'safety & security', two variables, i.e., 'account information' and 'personal information', are found to be the most significant in impacting the decision to share rides with others. In addition, 'risk of conflict' and 'possibility of accident' are identified by the perception model as the lowest contributing variables. Factor analysis reveals the suitability and reliability of the proposed SEM. Identifying the influential parameters in this study will help service providers understand and improve the quality of ride-sharing services for users.
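
For reference, a minimal sketch of the Relative Importance Index computation, using the commonly used formula RII = sum(W) / (A * N), where W are the Likert weights given by respondents, A is the highest possible weight, and N is the number of respondents. The responses below are made up; the paper's actual data are not reproduced here.

```python
# Relative Importance Index for one observed variable.
def rii(responses: list[int], max_weight: int = 5) -> float:
    return sum(responses) / (max_weight * len(responses))

# Hypothetical 5-point Likert responses for one variable:
print(round(rii([5, 4, 4, 3, 5, 5, 2, 4]), 3))   # 0.8
```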

The reuse of research software is central to research efficiency and academic exchange. The application of software enables researchers with varied backgrounds to reproduce, validate, and expand upon study findings. Furthermore, the analysis of open source code aids in the comprehension, comparison, and integration of approaches. Often, however, no further use occurs because relevant software cannot be found or is incompatible with existing research processes. This results in repetitive software development, which impedes the advancement of individual researchers and entire research communities. In this article, we present the DataDesc ecosystem, an approach to describing the data models of software interfaces with detailed and machine-actionable metadata. In addition to a specialized metadata schema, an exchange format and support tools for easy collection and automated publishing of software documentation are introduced. This approach practically increases the FAIRness, i.e., the findability, accessibility, interoperability, and thus reusability, of research software, and effectively promotes its impact on research.
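
As a purely hypothetical illustration of what machine-actionable interface metadata can look like, consider the record below; the field names are ours and do not reproduce the actual DataDesc schema.

```python
# Hypothetical machine-actionable description of one software interface.
interface_metadata = {
    "function": "compute_index",
    "description": "Computes a summary index from tabular input data.",
    "inputs": [
        {"name": "data", "type": "array[float]", "unit": None, "required": True},
        {"name": "max_weight", "type": "int", "unit": None, "required": False},
    ],
    "outputs": [
        {"name": "index", "type": "float", "unit": None},
    ],
}
# Tools could validate such records against a schema and publish them
# alongside the software, making the interface findable and comparable.
```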

We introduce variational sequential Optimal Experimental Design (vsOED), a new method for optimally designing a finite sequence of experiments under a Bayesian framework and with information-gain utilities. Specifically, we adopt a lower bound estimator for the expected utility through variational approximation to the Bayesian posteriors. The optimal design policy is solved numerically by simultaneously maximizing the variational lower bound and performing policy gradient updates. We demonstrate this general methodology for a range of OED problems targeting parameter inference, model discrimination, and goal-oriented prediction. These cases encompass explicit and implicit likelihoods, nuisance parameters, and physics-based partial differential equation models. Our vsOED results indicate substantially improved sample efficiency and reduced number of forward model simulations compared to previous sequential design algorithms.
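
One common instance of such a bound is a Barber-Agakov-style lower bound on the expected information gain, sketched below under our own toy linear-Gaussian assumptions; this illustrates the general idea of bounding the utility with a variational posterior, and is not the authors' estimator.

```python
# Variational lower bound on expected information gain (EIG):
# EIG(d) >= E_{p(theta) p(y|theta,d)}[ log q(theta|y,d) ] + H(p(theta)).
import torch
from torch.distributions import Normal

def eig_lower_bound(design, prior, simulate, q_posterior, n=4096):
    theta = prior.sample((n,))                      # theta ~ p(theta)
    y = simulate(theta, design)                     # y ~ p(y | theta, design)
    log_q = q_posterior(y, design).log_prob(theta)  # log q(theta | y, design)
    return log_q.mean() + prior.entropy()           # lower-bounds the EIG

# Toy linear-Gaussian model where the exact posterior makes the bound tight:
prior = Normal(0.0, 1.0)
sigma = 0.5
simulate = lambda th, d: d * th + sigma * torch.randn_like(th)
def q_posterior(y, d):                              # exact Gaussian posterior
    var = 1.0 / (1.0 + d ** 2 / sigma ** 2)
    return Normal(var * d * y / sigma ** 2, var ** 0.5)

print(eig_lower_bound(2.0, prior, simulate, q_posterior))
# ~ 0.5 * log(1 + d^2 / sigma^2) ~= 1.42
```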

Few-sample learning (FSL) is significant and challenging in the field of machine learning. The ability to learn and generalize from very few samples is a noticeable demarcation separating artificial intelligence from human intelligence, since humans can readily establish cognition of novel concepts from just a single or a handful of examples, whereas machine learning algorithms typically entail hundreds or thousands of supervised samples to guarantee generalization. Despite a long history dating back to the early 2000s and widespread attention in recent years with booming deep learning technologies, few surveys or reviews of FSL have been available until now. In this context, we extensively review 200+ FSL papers spanning the 2000s to 2019 and provide a timely and comprehensive survey of FSL. In this survey, we review the evolution history as well as the current progress of FSL, categorize FSL approaches into generative-model-based and discriminative-model-based kinds in principle, and place particular emphasis on meta-learning-based FSL approaches. We also summarize several recently emerging extensional topics of FSL and review the latest advances on these topics. Furthermore, we highlight important FSL applications covering many research hotspots in computer vision, natural language processing, audio and speech, reinforcement learning and robotics, data analysis, etc. Finally, we conclude the survey with a discussion of promising trends, in the hope of providing guidance and insights for follow-up research.

A core capability of intelligent systems is the ability to quickly learn new tasks by drawing on prior experience. Gradient (or optimization) based meta-learning has recently emerged as an effective approach for few-shot learning. In this formulation, meta-parameters are learned in the outer loop, while task-specific models are learned in the inner loop, using only a small amount of data from the current task. A key challenge in scaling these approaches is the need to differentiate through the inner loop learning process, which can impose considerable computational and memory burdens. By drawing upon implicit differentiation, we develop the implicit MAML algorithm, which depends only on the solution to the inner level optimization and not the path taken by the inner loop optimizer. This effectively decouples the meta-gradient computation from the choice of inner loop optimizer. As a result, our approach is agnostic to the choice of inner loop optimizer and can gracefully handle many gradient steps without vanishing gradients or memory constraints. Theoretically, we prove that implicit MAML can compute accurate meta-gradients with a memory footprint that is, up to small constant factors, no more than that which is required to compute a single inner loop gradient and at no overall increase in the total computational cost. Experimentally, we show that these benefits of implicit MAML translate into empirical gains on few-shot image recognition benchmarks.
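
A minimal numerical sketch (not the authors' code) of the implicit meta-gradient computation: rather than backpropagating through the inner-loop path, one solves (I + H/lambda) v = g_test by conjugate gradient at the inner solution, where H is the Hessian of the inner loss and g_test the outer-loss gradient. The tiny explicit Hessian below is an illustrative stand-in; in practice the matrix-vector products would be computed via Hessian-vector products without materializing H.

```python
# Implicit meta-gradient via conjugate gradient on (I + H/lambda) v = g_test.
import numpy as np

def conjugate_gradient(matvec, b, iters=50, tol=1e-10):
    x = np.zeros_like(b); r = b.copy(); p = r.copy()
    for _ in range(iters):
        Ap = matvec(p)
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

H = np.array([[3.0, 1.0], [1.0, 2.0]])        # inner-loss Hessian (stand-in)
g_test = np.array([1.0, -1.0])                # outer-loss gradient at theta*
lam = 1.0
matvec = lambda v: v + (1.0 / lam) * (H @ v)  # (I + H/lambda) v
meta_grad = conjugate_gradient(matvec, g_test)
print(meta_grad)  # equals np.linalg.solve(np.eye(2) + H / lam, g_test)
```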
