
In this paper, with the goal of enhancing the minimally invasive spinal fixation procedure in osteoporotic patients, we propose a first-of-its-kind image-guided robotic framework for performing an autonomous and patient-specific procedure using a unique concentric tube steerable drilling robot (CT-SDR). Particularly, leveraging the CT-SDR, we introduce the concept of J-shape drilling based on a trajectory planned on a pre-operative CT scan of the patient, followed by appropriate calibration, registration, and navigation steps to safely execute this trajectory in real time using our unique robotic setup. To thoroughly evaluate the performance of our framework, we performed several experiments on two different vertebral phantoms designed based on CT scans of real patients.
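
The registration step maps the trajectory planned in the CT frame into the robot's coordinate frame. As an illustration of this general step only (not the authors' specific pipeline), the Python sketch below estimates a rigid transform between paired fiducial points via the Kabsch/SVD method; the fiducial coordinates and frame names are hypothetical placeholders.

import numpy as np

def rigid_register(ct_pts, robot_pts):
    # Estimate rotation R and translation t mapping CT-frame points onto robot-frame points (Kabsch/SVD).
    ct_c, rb_c = ct_pts.mean(axis=0), robot_pts.mean(axis=0)
    H = (ct_pts - ct_c).T @ (robot_pts - rb_c)                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])      # guard against reflections
    R = Vt.T @ D @ U.T
    return R, rb_c - R @ ct_c

# Hypothetical fiducials observed in both frames (Nx3), with a toy ground-truth translation.
ct_fid = np.array([[10.0, 2.0, 5.0], [12.5, 4.1, 6.3], [9.2, 7.7, 4.8], [11.1, 3.3, 9.0]])
robot_fid = ct_fid + np.array([100.0, 50.0, 20.0])
R, t = rigid_register(ct_fid, robot_fid)
entry_robot = R @ np.array([10.5, 3.0, 6.0]) + t                     # planned entry point, robot frame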

Related Content

In this paper, we present the first constant-approximation algorithm for the {\em budgeted sweep coverage problem} (BSC). The BSC involves designing routes for a number of mobile sensors (a.k.a. robots) to periodically collect as much information as possible from points of interest (PoIs). To approach this problem, we propose to first examine the {\em multi-orienteering problem} (MOP). The MOP aims to find a set of $m$ vertex-disjoint paths that cover as many vertices as possible while adhering to a budget constraint $B$. We develop a constant-approximation algorithm for MOP and utilize it to achieve a constant approximation for BSC. Our findings open new possibilities for optimizing mobile sensor deployments and related combinatorial optimization tasks.
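
For concreteness, the MOP can be stated on a graph $G=(V,E)$ with edge costs $c$ roughly as follows; details such as rooted versus unrooted paths follow the paper and may differ from this hedged sketch:
\[
\max_{P_1,\dots,P_m}\ \Big|\bigcup_{i=1}^{m} V(P_i)\Big|
\quad\text{subject to}\quad \sum_{e\in P_i} c(e)\le B\ \ \forall i,
\qquad V(P_i)\cap V(P_j)=\emptyset\ \ \forall i\ne j,
\]
where each $P_i$ is a path in $G$ and $V(P_i)$ denotes its vertex set.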

There has been a lot of recent research on improving the efficiency of fine-tuning foundation models. In this paper, we propose a novel efficient fine-tuning method that allows the input image size of the Segment Anything Model (SAM) to be variable. SAM is a powerful foundation model for image segmentation trained on huge datasets, but it requires fine-tuning to recognize arbitrary classes. The input image size of SAM is fixed at 1024 x 1024, resulting in substantial computational demands during training. Furthermore, the fixed input image size may result in the loss of image information, e.g., due to the fixed aspect ratio. To address this problem, we propose Generalized SAM (GSAM). Different from previous methods, GSAM is the first to apply random cropping during training with SAM, thereby significantly reducing the computational cost of training. Experiments on datasets of various types and pixel counts show that GSAM can be trained more efficiently than SAM and other SAM fine-tuning methods, achieving comparable or higher accuracy.
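
The core training idea of variable-size random crops, as opposed to fixed 1024 x 1024 inputs, can be sketched as below. This is not GSAM's actual code: the crop-size range is an arbitrary placeholder, and `model` is assumed to be a SAM-like segmentation network that accepts variable spatial sizes, which is precisely the property GSAM is designed to provide.

import random
import torch
import torch.nn.functional as F

def random_square_crop(image, mask, sizes=(256, 384, 512)):
    # Sample a square crop of a random size; stands in for variable-size training crops.
    s = random.choice(sizes)
    _, H, W = image.shape
    top, left = random.randint(0, H - s), random.randint(0, W - s)
    return image[:, top:top + s, left:left + s], mask[top:top + s, left:left + s]

def train_step(model, optimizer, image, mask):
    img_c, msk_c = random_square_crop(image, mask)
    logits = model(img_c.unsqueeze(0))                      # (1, num_classes, s, s)
    loss = F.cross_entropy(logits, msk_c.unsqueeze(0).long())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()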

In this paper, we introduce the signed barcode, a new visual representation of the global structure of the rank invariant of a multi-parameter persistence module or, more generally, of a poset representation. Like its unsigned counterpart in one-parameter persistence, the signed barcode decomposes the rank invariant as a $\mathbb{Z}$-linear combination of rank invariants of indicator modules supported on segments in the poset. We develop the theory behind these decompositions, both for the usual rank invariant and for its generalizations, showing under what conditions they exist and are unique. We also show that, like its unsigned counterpart, the signed barcode reflects in part the algebraic structure of the module: specifically, it derives from the terms in the minimal rank-exact resolution of the module, i.e., its minimal projective resolution relative to the class of short exact sequences on which the rank invariant is additive. To complete the picture, we show some experimental results that illustrate the contribution of the signed barcode in the exploration of multi-parameter persistence modules.
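
Schematically, the decomposition underlying the signed barcode has the following shape, with $\mathcal{B}^{+}$ and $\mathcal{B}^{-}$ the multisets of positive and negative bars (segments in the poset) determined, as described above, by the minimal rank-exact resolution:
\[
\mathrm{Rk}(M)\;=\;\sum_{I\in\mathcal{B}^{+}}\mathrm{Rk}(k_I)\;-\;\sum_{J\in\mathcal{B}^{-}}\mathrm{Rk}(k_J),
\]
where $k_I$ denotes the indicator module supported on the segment $I$.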

In this paper, we employ a Bayesian approach to assess the reliability of a critical component in the Mars Sample Return program, focusing on the Earth Entry System's risk of containment not assured upon reentry. Our study uses Gaussian Process modeling under a Bayesian regime to analyze the Earth Entry System's resilience against operational stress. This Bayesian framework allows for a detailed probabilistic evaluation of the risk of containment not assured, indicating the feasibility of meeting the mission's stringent safety goal of 0.999999 probability of success. The findings underscore the effectiveness of Bayesian methods for complex uncertainty quantification analyses of computer simulations, providing valuable insights for computational reliability analysis in a risk-averse setting.
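
As a generic illustration of this kind of analysis (not the mission-specific model), the sketch below fits a Gaussian Process surrogate to simulator outputs and estimates a failure probability by Monte Carlo sampling of the posterior; the inputs, the margin definition, and the sample counts are all placeholder assumptions.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

rng = np.random.default_rng(0)

# Placeholder simulator data: inputs are entry conditions, output is a scalar safety margin,
# with margin < 0 standing in for "containment not assured".
X_train = rng.uniform(-1.0, 1.0, size=(40, 3))
y_train = 2.0 + X_train @ np.array([0.5, -0.3, 0.2]) + 0.05 * rng.standard_normal(40)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)

# Propagate input uncertainty: sample operational conditions, then sample the GP posterior.
X_ops = rng.uniform(-1.0, 1.0, size=(2000, 3))
draws = gp.sample_y(X_ops, n_samples=50, random_state=1)    # shape (2000, 50)
print(f"Estimated P(containment not assured): {np.mean(draws < 0.0):.2e}")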

In this paper, we propose a novel method for detecting DeepFakes that enhances the generalization of detection through semantic decoupling. There are now multiple DeepFake forgery technologies that not only possess unique forgery semantics but may also share common forgery semantics. The unique forgery semantics and irrelevant content semantics may promote overfitting and hamper generalization for DeepFake detectors. In our proposed method, after decoupling, the common forgery semantics can be extracted from DeepFakes and subsequently employed to improve the generalizability of DeepFake detectors. To pursue additional generalizability, we also design an adaptive high-pass module and a two-stage training strategy to improve the independence of the decoupled semantics. Evaluation on the FF++, Celeb-DF, DFD, and DFDC datasets showcases our method's excellent detection and generalization performance. Code is available at: //github.com/leaffeall/DFS-GDD.
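
The adaptive high-pass module is described only at a high level; as a rough, hypothetical sketch of the general idea, the block below applies a fixed Laplacian high-pass filter per channel and blends it with the input through a learnable gate. The real module's design may differ substantially.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HighPassGate(nn.Module):
    # Illustrative only: fixed Laplacian high-pass filtering blended with the input by a learnable gate.
    def __init__(self, channels):
        super().__init__()
        lap = torch.tensor([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]])
        self.register_buffer("kernel", lap.view(1, 1, 3, 3).repeat(channels, 1, 1, 1))
        self.alpha = nn.Parameter(torch.zeros(1))       # gate weight, sigmoid(0) = 0.5 at initialization
        self.channels = channels

    def forward(self, x):
        high = F.conv2d(x, self.kernel, padding=1, groups=self.channels)   # depthwise high-pass
        gate = torch.sigmoid(self.alpha)
        return gate * high + (1.0 - gate) * x

out = HighPassGate(3)(torch.randn(2, 3, 224, 224))      # batch of RGB frames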

In this work, we explore the intersection of sparse coding theory and deep learning to enhance our understanding of feature extraction capabilities in advanced neural network architectures. We begin by introducing a novel class of Deep Sparse Coding (DSC) models and establish a thorough theoretical analysis of their uniqueness and stability properties. By applying iterative algorithms to these DSC models, we derive convergence rates for convolutional neural networks (CNNs) in their ability to extract sparse features. This provides a strong theoretical foundation for the use of CNNs in sparse feature learning tasks. We additionally extend this convergence analysis to more general neural network architectures, including those with diverse activation functions, as well as self-attention and transformer-based models. This broadens the applicability of our findings to a wide range of deep learning methods for deep sparse feature extraction. Inspired by the strong connection between sparse coding and CNNs, we also explore training strategies to encourage neural networks to learn more sparse features. Through numerical experiments, we demonstrate the effectiveness of these approaches, providing valuable insights for the design of efficient and interpretable deep learning models.
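
As a reference point for the iterative algorithms mentioned above, the classical single-layer sparse coding problem and its ISTA update read as follows; the deep (multi-layer) DSC models analyzed in the paper stack such layers, and their exact formulation is the paper's own:
\[
\min_{z}\ \tfrac{1}{2}\,\|x - Dz\|_2^2 + \lambda\|z\|_1,
\qquad
z^{(k+1)} = S_{\lambda/L}\!\Big(z^{(k)} - \tfrac{1}{L}\,D^{\top}\big(Dz^{(k)} - x\big)\Big),
\]
where $S_{\tau}(u)=\operatorname{sign}(u)\max(|u|-\tau,0)$ is the soft-thresholding operator and $L$ is a Lipschitz constant of the smooth term, e.g. the largest eigenvalue of $D^{\top}D$.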

In this paper, we propose a data-driven method to learn interpretable topological features of biomolecular data and demonstrate the efficacy of parsimonious models trained on topological features in predicting the stability of synthetic mini proteins. We compare models that leverage automatically-learned structural features against models trained on a large set of biophysical features determined by subject-matter experts (SME). Our models, based only on topological features of the protein structures, achieved 92%-99% of the performance of SME-based models in terms of the average precision score. By interrogating model performance and feature importance metrics, we extract numerous insights that uncover high correlations between topological features and SME features. We further showcase how combining topological features and SME features can lead to improved model performance over either feature set used in isolation, suggesting that, in some settings, topological features may provide new discriminating information not captured in existing SME features that are useful for protein stability prediction.
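
The modeling setup, training a parsimonious classifier on a fixed vector of topological features and scoring it by average precision, can be illustrated generically as below; the feature matrix and labels are random placeholders, since the actual construction of topological features from protein structures is the paper's contribution.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)

# Placeholders: rows are proteins, columns are precomputed topological summary features,
# and y marks whether the synthetic mini protein is stable.
X = rng.standard_normal((500, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
ap = average_precision_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"Average precision on held-out proteins: {ap:.3f}")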

The main goal of this paper is to propose a new quaternion total variation regularization model for solving linear ill-posed quaternion inverse problems, which arise from three-dimensional signal filtering or color image processing. The quaternion total variation term in the model is represented by collaborative total variation regularization and approximated by a quaternion iteratively reweighted norm. A novel flexible quaternion generalized minimal residual method is presented to solve this model quickly. An improved convergence theory is established to obtain a sharp upper bound on the residual norm of the quaternion generalized minimal residual method (QGMRES). The convergence theory is also presented for preconditioned QGMRES. Numerical experiments indicate the superiority of the proposed model and algorithms over state-of-the-art methods in terms of iteration steps, CPU time, and the quality criteria of the restored color images.
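
Written generically (in real-valued notation for readability), the regularization model has the familiar TV-regularized least-squares form below; in the paper all quantities are quaternion-valued, the TV term is the collaborative variant, and each reweighting step replaces the nonsmooth term by a weighted quadratic:
\[
\min_{\mathbf{x}}\ \tfrac{1}{2}\,\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_2^2+\lambda\,\mathrm{TV}(\mathbf{x}),
\qquad
\mathrm{TV}(\mathbf{x})\approx\tfrac{1}{2}\,\big\|\mathbf{W}^{1/2}\nabla\mathbf{x}\big\|_2^2,
\]
where $\mathbf{W}$ is a diagonal weight matrix recomputed from the current iterate at each reweighting step.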

In this paper, we introduce a low-cost and low-power tiny supervised on-device learning (ODL) core that can address the distributional shift of input data for human activity recognition. Although ODL for resource-limited edge devices has been studied recently, how exactly to provide the training labels to these devices at runtime remains an open issue. To address this problem, we propose to combine automatic data pruning with supervised ODL to reduce the number of queries needed to acquire predicted labels from a nearby teacher device and thus save power during model retraining. The data pruning threshold is tuned automatically, eliminating manual threshold tuning. As a tinyML solution operating at a few mW for human activity recognition, we design a supervised ODL core that supports our automatic data pruning using a 45nm CMOS process technology. We show that the required memory size for the core is smaller than that of a same-shaped multilayer perceptron (MLP) and that the power consumption is only 3.39mW. Experiments using a human activity recognition dataset show that the proposed automatic data pruning reduces the communication volume by 55.7%, and the power consumption accordingly, with only a 0.9% accuracy loss.
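
The core idea of automatic data pruning, querying the teacher only for samples the on-device model is uncertain about, with a threshold tuned from recent statistics rather than by hand, can be sketched as follows. The confidence measure, window size, and target query rate are hypothetical choices, not the paper's exact design, and `odl_core` and `teacher_label` are assumed interfaces.

from collections import deque
import numpy as np

class AutoPruner:
    # Query the teacher only for low-confidence samples; the threshold tracks a target query rate.
    def __init__(self, target_query_rate=0.4, window=256):
        self.target = target_query_rate
        self.confidences = deque(maxlen=window)

    def should_query(self, probs):
        conf = float(np.max(probs))                               # confidence of the on-device prediction
        self.confidences.append(conf)
        threshold = np.quantile(self.confidences, self.target)    # auto-tuned from recent history
        return conf <= threshold

# Usage inside the ODL loop:
# pruner = AutoPruner()
# for x in sensor_stream:
#     probs = odl_core.predict(x)
#     if pruner.should_query(probs):
#         odl_core.train(x, teacher_label(x))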

In this paper, we introduce the Reinforced Mnemonic Reader for machine reading comprehension tasks, which enhances previous attentive readers in two aspects. First, a reattention mechanism is proposed to refine current attentions by directly accessing past attentions that are temporally memorized in a multi-round alignment architecture, so as to avoid the problems of attention redundancy and attention deficiency. Second, a new optimization approach, called dynamic-critical reinforcement learning, is introduced to extend the standard supervised method. It always encourages the model to predict a more acceptable answer, so as to address the convergence suppression problem that occurs in traditional reinforcement learning algorithms. Extensive experiments on the Stanford Question Answering Dataset (SQuAD) show that our model achieves state-of-the-art results. Meanwhile, our model outperforms previous systems by over 6% in terms of both Exact Match and F1 metrics on two adversarial SQuAD datasets.
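
A loose, hypothetical illustration of the reattention idea (not the paper's exact formula) is sketched below: the current alignment scores are biased by an attention map memorized from the previous alignment round; the blending weight `gamma` and the dot-product similarity are placeholder choices.

import torch
import torch.nn.functional as F

def reattend(query, key, past_attn, gamma=3.0):
    # Refine the current alignment with a temporally memorized past attention map (illustrative only).
    scores = query @ key.transpose(-1, -2) / key.size(-1) ** 0.5   # (Q_len, K_len) similarity
    return F.softmax(scores + gamma * past_attn, dim=-1)           # bias toward previously aligned pairs

# Multi-round alignment: each round's attention is memorized and fed to the next round.
Q, K = torch.randn(8, 64), torch.randn(12, 64)
attn = torch.zeros(8, 12)
for _ in range(2):
    attn = reattend(Q, K, attn)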
