
Parkinson's Disease (PD) is associated with gait movement disorders, such as bradykinesia, stiffness, tremors and postural instability, caused by progressive dopamine deficiency. Today, some approaches have implemented learning representations to quantify kinematic patterns during locomotion, supporting clinical procedures such as diagnosis and treatment planning. These approaches assume a large amount of stratified and labeled data to optimize discriminative representations. Nonetheless, these requirements may restrict such approaches from operating in real scenarios during clinical practice. This work introduces a self-supervised generative representation that learns gait-motion-related patterns under a video-reconstruction pretext task and an anomaly detection framework. The architecture is trained following a one-class weakly supervised scheme to avoid inter-class variance and to capture the multiple relationships that characterize locomotion. The proposed approach was validated with 14 PD patients and 23 control subjects, trained on the control population only, and achieved an AUC of 95%, a homoscedasticity level of 70%, and a shapeness level of 70% in the classification task, considering its generalization.
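To make the anomaly-detection idea concrete, the sketch below scores an unseen gait sequence by its reconstruction error under a representation fitted to control subjects only; a PCA reconstruction stands in for the paper's video autoencoder, and all data, dimensions, and the 95th-percentile threshold rule are hypothetical.

```python
# Minimal sketch of one-class anomaly scoring by reconstruction error.
# A PCA reconstruction stands in for the paper's video autoencoder; the
# data, latent size and threshold rule are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gait descriptors: rows = sequences, cols = features.
controls_train = rng.normal(0.0, 1.0, size=(200, 64))
controls_val   = rng.normal(0.0, 1.0, size=(50, 64))
new_sequence   = rng.normal(0.5, 1.5, size=(1, 64))    # unseen subject

# "Train" on controls only: keep the top-k principal components.
mean = controls_train.mean(axis=0)
_, _, vt = np.linalg.svd(controls_train - mean, full_matrices=False)
components = vt[:16]                                    # hypothetical latent size

def reconstruction_error(x):
    """Squared error between x and its projection onto the control subspace."""
    z = (x - mean) @ components.T
    x_hat = z @ components + mean
    return np.sum((x - x_hat) ** 2, axis=1)

# Threshold from the control validation set (e.g. its 95th percentile).
threshold = np.percentile(reconstruction_error(controls_val), 95)
score = reconstruction_error(new_sequence)[0]
print("anomaly score:", score,
      "-> flagged as PD-like" if score > threshold else "-> control-like")
```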

Related content

Friction drag from a turbulent fluid moving past or inside an object plays a crucial role in domains as diverse as transportation, public utility infrastructure, energy technology, and human health. As a direct measure of the shear-induced friction forces, an accurate prediction of the wall-shear stress can contribute to sustainability, conservation of resources, and carbon neutrality in civil aviation, as well as to enhanced medical treatment of vascular diseases and cancer. Despite such importance for modern society, we still lack adequate experimental methods to capture the instantaneous wall-shear stress dynamics. In this contribution, we present a holistic approach that derives velocity and wall-shear stress fields with high spatial and temporal resolution from flow measurements, using a deep optical flow estimator informed by physical knowledge. The validity and physical correctness of the derived flow quantities are demonstrated on synthetic and real-world experimental data covering a range of relevant fluid flows.
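As a rough illustration of the quantity being estimated, the sketch below computes the wall-shear stress from a velocity field via the wall gradient, tau_w = mu * du/dy at y = 0, using a one-sided finite difference; the synthetic velocity profile, grid, and viscosity are placeholders and are not the paper's deep optical flow estimator.

```python
# Minimal sketch: wall-shear stress from a measured velocity field via the
# wall gradient tau_w = mu * du/dy at y = 0. The velocity field, grid and
# viscosity below are synthetic placeholders.
import numpy as np

mu = 1.0e-3                      # dynamic viscosity [Pa s], hypothetical fluid
ny, nx = 64, 128                 # wall-normal x streamwise grid points
y = np.linspace(0.0, 0.01, ny)   # wall-normal coordinate [m], wall at y = 0

# Synthetic streamwise velocity u(y, x): near-linear profile close to the wall.
u = (np.outer(y, np.ones(nx)) * 150.0
     + 0.5 * np.outer(y**2, np.sin(np.linspace(0.0, 2.0 * np.pi, nx))))

# One-sided second-order finite difference for du/dy at the wall.
dy = y[1] - y[0]
dudy_wall = (-3.0 * u[0] + 4.0 * u[1] - u[2]) / (2.0 * dy)

tau_wall = mu * dudy_wall        # instantaneous wall-shear stress along x [Pa]
print("mean wall-shear stress: %.4f Pa" % tau_wall.mean())
```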

Radon is a carcinogenic, radioactive gas that can accumulate indoors. Indoor radon exposure at the national scale is usually estimated on the basis of extensive measurement campaigns. However, the characteristics of the sample often differ from those of the population due to the large number of relevant factors, such as the availability of geogenic radon or the floor level. Furthermore, the sample size usually does not allow exposure estimation with high spatial resolution. We propose a model-based approach that allows a more realistic estimation of the indoor radon distribution with higher spatial resolution than a purely data-based approach. We applied a two-stage modelling approach: (1) a quantile regression forest using environmental and building data as predictors was applied to estimate the probability distribution function of indoor radon for each floor level of each residential building in Germany; (2) a probabilistic Monte Carlo sampling technique enabled the combination and population weighting of the floor-level predictions. In this way, the uncertainty of the individual predictions is effectively propagated into the estimate of variability at the aggregated level. The results give an arithmetic mean of 63 Bq/m3, a geometric mean of 41 Bq/m3, and a 95th percentile of 180 Bq/m3. The exceedance probabilities for 100 Bq/m3 and 300 Bq/m3 are 12.5% (10.5 million people) and 2.2% (1.9 million people), respectively. In large cities, individual indoor radon exposure is generally lower than in rural areas, which is due to the different distribution of the population across floor levels. The advantages of our approach are (1) an accurate exposure estimation even if the survey was not fully representative with respect to the main controlling factors, and (2) an estimate of the exposure distribution with a much higher spatial resolution than basic descriptive statistics.
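A minimal sketch of the second stage is given below: floor-level quantile predictions (as a quantile regression forest would provide) are turned into Monte Carlo draws by inverting the empirical quantile function, then population-weighted into an aggregate exposure distribution; all quantile values and occupant counts are invented.

```python
# Minimal sketch of probabilistic Monte Carlo aggregation of floor-level
# quantile predictions into a population-weighted exposure distribution.
# The quantile values and occupant counts are invented.
import numpy as np

rng = np.random.default_rng(1)
quantile_levels = np.array([0.05, 0.25, 0.50, 0.75, 0.95])

# Hypothetical predicted radon quantiles [Bq/m3] for three floor levels of one
# building (e.g. from a quantile regression forest), plus occupants per floor.
floor_quantiles = np.array([
    [20.0, 45.0, 80.0, 140.0, 320.0],   # ground floor
    [15.0, 30.0, 55.0,  90.0, 200.0],   # 1st floor
    [10.0, 22.0, 40.0,  65.0, 150.0],   # 2nd floor
])
occupants = np.array([2, 3, 1])

n_draws = 100_000
samples, weights = [], []
for q_vals, n_people in zip(floor_quantiles, occupants):
    # Invert each floor's empirical quantile function by linear interpolation;
    # in this simplification the tails are clamped to the outer quantiles.
    u = rng.uniform(0.0, 1.0, n_draws)
    samples.append(np.interp(u, quantile_levels, q_vals))
    weights.append(np.full(n_draws, n_people, dtype=float))

samples = np.concatenate(samples)
weights = np.concatenate(weights)

mean = np.average(samples, weights=weights)
exceed_100 = np.average(samples > 100.0, weights=weights)
print(f"population-weighted mean: {mean:.1f} Bq/m3, P(>100 Bq/m3): {exceed_100:.3f}")
```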

Histopathology plays a pivotal role in medical diagnostics. In contrast to preparing permanent sections for histopathology, a time-consuming process, preparing frozen sections is significantly faster and can be performed during surgery, where the sample scanning time should be optimized. Super-resolution techniques allow imaging the sample at lower magnification and sparing scanning time. In this paper, we present a new approach to super-resolution for histopathological frozen sections, with a focus on achieving better distortion measures rather than pursuing photorealistic images that may compromise critical diagnostic information. Our deep-learning architecture focuses on learning the error between interpolated images and real images, thereby generating high-resolution images while preserving critical image details and reducing the risk of diagnostic misinterpretation. This is done by leveraging loss functions in the frequency domain, assigning higher weights to the reconstruction of complex, high-frequency components. In comparison to existing methods, we obtained significant improvements in terms of the Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR), and we identified details that are lost in the low-resolution frozen-section images and that affect the pathologist's clinical decisions. Our approach has great potential for providing more rapid frozen-section imaging, with less scanning, while preserving the high resolution of the imaged sample.
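The following sketch illustrates the kind of frequency-weighted loss described here: the residual between a predicted and a reference image is transformed with an FFT, and higher spatial frequencies receive larger weights; the specific weighting scheme and images are illustrative, not the paper's exact loss.

```python
# Minimal sketch of a frequency-weighted reconstruction loss: the residual
# between prediction and reference is taken to the Fourier domain and
# high-frequency components are weighted more heavily. The weighting scheme
# and images are illustrative only.
import numpy as np

def frequency_weighted_loss(pred, target, alpha=2.0):
    """Mean weighted squared magnitude of the residual spectrum.

    alpha controls how strongly high frequencies are emphasised (hypothetical).
    """
    residual = np.fft.fft2(pred - target)
    fy = np.fft.fftfreq(pred.shape[0])[:, None]
    fx = np.fft.fftfreq(pred.shape[1])[None, :]
    radius = np.sqrt(fx**2 + fy**2)            # normalised spatial frequency
    weights = 1.0 + alpha * radius / radius.max()
    return np.mean(weights * np.abs(residual) ** 2)

rng = np.random.default_rng(2)
target = rng.random((64, 64))
blurry = 0.5 * (target + target.mean())        # stand-in for an interpolated image
print("weighted loss:", frequency_weighted_loss(blurry, target))
```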

Biological organisms have acquired sophisticated body shapes for walking or climbing through evolutionary processes spanning millions of years. In contrast, the components of locomoting soft robots, such as legs and arms, are designed in trial-and-error loops guided by a priori knowledge and experience, which leaves considerable room for improvement. Here, we present optimized soft robots that perform a specific locomotion task without any a priori assumptions or knowledge about the layout and shapes of the limbs, by fully exploiting the computational capabilities of topology optimization. The only requirements introduced were a design domain and a periodically acting pneumatic actuator. The freeform shape of a soft body was derived from iterative updates in a gradient-based topology optimization that incorporated complex physical phenomena, such as large deformations, contacts, material viscosity, and fluid-structure interactions, in transient problems. The locomotion tasks included horizontal movement on flat ground (walking) and vertical movement between two walls (climbing). Without any human intervention, the optimized soft robots developed limbs and exhibited locomotion similar to those of biological organisms. Linkage-like structures were formed for the climbing task to assign different movements to multiple legs with the limited degrees of freedom of the actuator. We fabricated the optimized designs using 3D printing and confirmed the performance of these robots. This study presents a new and efficient strategy for designing soft robots and other bioinspired systems, suggesting that a purely mathematical process can produce shapes reminiscent of nature's long-term evolution.
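As a schematic of the optimizer at the core of such a pipeline, the sketch below performs a generic gradient-based density update with a move limit and a crude volume correction; the sensitivities are random stand-ins for the paper's transient, fluid-structure-coupled adjoint gradients, and the update rule is a simplification rather than the authors' method.

```python
# Schematic of one gradient-based density update in topology optimization:
# element densities are nudged along the objective sensitivity, clipped to a
# move limit and the [0, 1] box, then shifted toward a volume constraint.
# The sensitivities are random stand-ins for real adjoint gradients.
import numpy as np

rng = np.random.default_rng(3)
n_elements = 1000
rho = np.full(n_elements, 0.5)        # initial design densities
move = 0.05                           # per-iteration move limit (hypothetical)
volume_fraction = 0.4                 # allowed material fraction (hypothetical)

for it in range(50):
    sensitivity = rng.normal(size=n_elements)      # stand-in for dJ/d(rho)
    step = np.clip(-0.01 * sensitivity, -move, move)
    rho = np.clip(rho + step, 0.0, 1.0)
    # Crude volume correction: uniform shift toward the target mean density.
    rho = np.clip(rho + (volume_fraction - rho.mean()), 0.0, 1.0)

print("final mean density: %.3f" % rho.mean())
```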

Engineers are often faced with the decision of selecting the most appropriate model for simulating the behavior of engineered systems from a candidate set of models. Experimental monitoring data can generate significant value by supporting engineers in such decisions. Such data can be leveraged within a Bayesian model updating process, enabling the uncertainty-aware calibration of any candidate model. The model selection task can subsequently be cast as a problem of decision-making under uncertainty, where one seeks to select the model that yields an optimal balance between the reward associated with model precision, in terms of recovering target Quantities of Interest (QoI), and the cost of each model, in terms of complexity and compute time. In this work, we examine the model selection task by means of Bayesian decision theory, through the prism of the availability of models of various refinements, and thus varying levels of fidelity. In doing so, we offer an exemplary application of this framework to the IMAC-MVUQ Round-Robin Challenge. Numerical investigations show that the outcome of model selection depends on the target QoI.
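A minimal sketch of this decision-theoretic framing is given below: each candidate model receives an expected utility that trades posterior-predictive accuracy on the QoI against a cost for complexity and compute time; all samples, costs, and the trade-off weight are invented for illustration.

```python
# Minimal sketch of model selection as decision-making under uncertainty:
# expected utility = accuracy reward on the QoI minus a weighted model cost.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(4)
qoi_true = 1.0                                        # target quantity of interest

# Posterior-predictive samples of the QoI from three candidate models
# (e.g. after Bayesian updating), plus their relative run-time costs.
models = {
    "low-fidelity":  (rng.normal(1.3, 0.40, 5000), 0.01),
    "mid-fidelity":  (rng.normal(1.1, 0.15, 5000), 0.10),
    "high-fidelity": (rng.normal(1.0, 0.05, 5000), 1.00),
}
cost_weight = 0.5                                     # hypothetical trade-off parameter

def expected_utility(samples, cost):
    # Reward: negative expected squared error of the QoI; penalty: weighted cost.
    return -np.mean((samples - qoi_true) ** 2) - cost_weight * cost

scores = {name: expected_utility(s, c) for name, (s, c) in models.items()}
best = max(scores, key=scores.get)
print(scores, "-> select:", best)
```

With these placeholder numbers the mid-fidelity model wins, illustrating how the optimal choice need not be the most refined model once cost is accounted for.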

Wearing a mask is one of the important measures for preventing infectious diseases. However, it is difficult to detect people's mask-wearing status in public places with high traffic flow. To address this problem, this paper proposes a mask-wearing face detection model based on YOLOv5l. First, Multi-Head Attentional Self-Convolution not only improves the convergence speed of the model but also enhances its detection accuracy. Second, the introduction of the Swin Transformer Block extracts more useful feature information, enhances the detection of small targets, and improves the overall accuracy of the model. Our designed I-CBAM module further improves target detection accuracy. In addition, enhanced feature fusion enables the model to better adapt to object detection tasks at different scales. In experiments on the MASK dataset, the proposed model achieved a 1.1% improvement in mAP(0.5) and a 1.3% improvement in mAP(0.5:0.95) compared to the YOLOv5l baseline. Our proposed method significantly enhances mask-wearing detection capability.
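The I-CBAM module itself is not specified in this abstract; as a reference point, the sketch below implements a standard CBAM-style block (channel attention followed by spatial attention) in PyTorch, the kind of module such a modification would build on.

```python
# Sketch of a standard CBAM-style attention block (channel attention followed
# by spatial attention). This is a generic reference implementation, not the
# paper's I-CBAM module.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Channel attention: shared MLP over average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: convolution over channel-wise mean and max maps.
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        avg_pool = torch.mean(x, dim=(2, 3), keepdim=True)
        max_pool = torch.amax(x, dim=(2, 3), keepdim=True)
        channel_att = torch.sigmoid(self.mlp(avg_pool) + self.mlp(max_pool))
        x = x * channel_att
        spatial_in = torch.cat([x.mean(dim=1, keepdim=True),
                                x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(spatial_in))

feat = torch.randn(1, 64, 32, 32)      # a hypothetical detector feature map
print(CBAM(64)(feat).shape)            # attention preserves the feature shape
```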

While analogies are a common way to evaluate word embeddings in NLP, it is also of interest to investigate whether analogical reasoning is itself a task that can be learned. In this paper, we test several ways to learn basic analogical reasoning, specifically focusing on analogies that are more typical of those used to evaluate analogical reasoning in humans than of those in commonly used NLP benchmarks. Our experiments find that models are able to learn analogical reasoning, even with a small amount of data. We additionally compare our models to a dataset with a human baseline and find that, after training, models approach human performance.
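For context, the sketch below shows the classic vector-offset formulation of analogies (a : b :: c : ?), solved by a nearest-neighbour search around b - a + c under cosine similarity; the toy embeddings are invented, whereas the paper trains models to perform analogical reasoning directly.

```python
# Classic vector-offset view of analogies (a : b :: c : ?): the answer is the
# word whose embedding is closest to b - a + c. The toy embeddings are invented.
import numpy as np

vocab = {
    "man":    np.array([1.0, 0.0, 0.2]),
    "woman":  np.array([1.0, 1.0, 0.2]),
    "king":   np.array([0.2, 0.0, 1.0]),
    "queen":  np.array([0.2, 1.0, 1.0]),
    "prince": np.array([0.2, 0.0, 1.1]),
}

def solve_analogy(a, b, c):
    target = vocab[b] - vocab[a] + vocab[c]
    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    # Exclude the query words themselves, then pick the most similar candidate.
    candidates = {w: cosine(v, target) for w, v in vocab.items() if w not in (a, b, c)}
    return max(candidates, key=candidates.get)

print("man : woman :: king :", solve_analogy("man", "woman", "king"))  # expect "queen"
```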

Suppose a gambler pays one coin per coup to play a two-armed Futurity slot machine, an antique casino machine, and two coins are refunded for every two consecutive gambler losses. This refund is called the Futurity award. The casino owner honestly advertises that each arm of the two-armed machine is fair, in the sense that the asymptotic expected profit of both gambler and dealer is 0 if the gambler only plays that arm. The gambler is allowed to play either arm on each coup, alternating in some deterministic order or choosing at random. For almost 90 years, since the Futurity slot machine was designed in 1936, it has remained an open problem whether the machine obeys the "long bet will lose" phenomenon so common to casino games. Ethier and Lee [Ann. Appl. Probab. 20 (2010), pp. 1098-1125] conjectured that a player will also definitely lose in the long run by applying any non-random-mixture strategy. In this paper, we prove Ethier and Lee's conjecture. Together with Ethier and Lee's conclusion, our result shows that if players decide on either a random or a non-random two-arm strategy before playing and then play repeatedly without interruption, the casino owner is always profitable, even when the Futurity award is taken into account. The contribution of this work is that it helps complete the demystification of casino profitability. Moreover, it paves the way for casino owners to improve casino game design and for players to participate in gambling more effectively.
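The sketch below shows how one could probe the "long bet will lose" behaviour empirically under a deterministic alternating strategy; the two-coin win payout and the per-arm win probabilities are assumptions and are not calibrated to the fair arms analysed by Ethier and Lee.

```python
# Monte Carlo sketch of a two-armed Futurity-style game under a deterministic
# alternating strategy: each coup costs one coin, a win is assumed to return
# two coins, and two consecutive losses trigger a two-coin refund (the
# Futurity award) and reset the loss counter. The per-arm win probabilities
# are placeholders, NOT the fair arms analysed by Ethier and Lee.
import random

def simulate(p_arm_a, p_arm_b, n_coups, seed=0):
    rng = random.Random(seed)
    profit, consecutive_losses = 0, 0
    for coup in range(n_coups):
        p_win = p_arm_a if coup % 2 == 0 else p_arm_b   # alternate A, B, A, B, ...
        profit -= 1                                     # pay one coin to play
        if rng.random() < p_win:
            profit += 2                                 # assumed win payout
            consecutive_losses = 0
        else:
            consecutive_losses += 1
            if consecutive_losses == 2:                 # Futurity award
                profit += 2
                consecutive_losses = 0
    return profit

print("gambler profit after 1e6 coups:", simulate(0.4, 0.4, 1_000_000))
```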

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. This lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
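A minimal sketch of the comparison step follows: Shepard's law gives similarity as an exponential decay with distance in a psychological space, sim(x, y) = exp(-d(x, y)); the toy saliency vectors and the logistic link to an agreement probability are hypothetical, not the paper's fitted model.

```python
# Minimal sketch of the comparison step: Shepard's universal law of
# generalization, sim(x, y) = exp(-d(x, y)). The toy saliency vectors and the
# logistic mapping to an agreement probability are hypothetical.
import numpy as np

def shepard_similarity(own_explanation, ai_explanation):
    distance = np.linalg.norm(own_explanation - ai_explanation, ord=1)
    return np.exp(-distance)

# Toy saliency maps flattened to vectors (values sum to 1 over image regions).
own_map = np.array([0.6, 0.3, 0.1])       # where the participant would look
ai_map  = np.array([0.5, 0.4, 0.1])       # what the saliency map highlights

similarity = shepard_similarity(own_map, ai_map)
p_agree = 1.0 / (1.0 + np.exp(-(4.0 * similarity - 2.0)))   # hypothetical link
print(f"similarity: {similarity:.3f}, predicted P(AI decides like me): {p_agree:.3f}")
```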

Deep learning is usually described as an experiment-driven field under continuous criticism for lacking theoretical foundations. This problem has been partially addressed by a large volume of literature, which has so far not been well organized. This paper reviews and organizes the recent advances in deep learning theory. The literature is categorized into six groups: (1) complexity- and capacity-based approaches for analyzing the generalizability of deep learning; (2) stochastic differential equations and their dynamical systems for modelling stochastic gradient descent and its variants, which characterize the optimization and generalization of deep learning and are partially inspired by Bayesian inference; (3) the geometrical structures of the loss landscape that drive the trajectories of the dynamical systems; (4) the roles of over-parameterization of deep neural networks from both positive and negative perspectives; (5) theoretical foundations of several special structures in network architectures; and (6) the increasingly intensive concerns about ethics and security and their relationships with generalizability.
