
Preoperative medical imaging is an essential part of surgical planning. The data from medical imaging devices, such as CT and MRI scanners, consist of stacks of 2D images in DICOM format. Meanwhile, advances in 3D data visualization provide further information by assembling these cross-sections into 3D volumetric datasets. Microsoft's HoloLens 2 (HL2), widely regarded as one of the leading Mixed Reality (XR) headsets on the market, promises to enhance 3D visualization by offering users an immersive experience. This paper introduces a prototype holographic XR DICOM Viewer for the 3D visualization of DICOM image sets on HL2 for surgical planning. We first developed a standalone graphical C++ engine using the native DirectX 11 API and HLSL shaders. On top of this engine, the prototype applies the OpenXR API for potential deployment on a wide range of devices from vendors across the XR spectrum. With native access to the device, our prototype probes the limits of the HL2's hardware capabilities for 3D volume rendering and interaction. Moreover, smartphones can act as input devices by connecting to our server, providing an additional method of user interaction. In summary, we present a holographic DICOM viewer for the HoloLens 2 and contribute (i) a prototype that renders DICOM image stacks in real time on HL2, (ii) three types of user interaction in XR, and (iii) a preliminary qualitative evaluation of the prototype.
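
As background for the rendering pipeline described above, the sketch below shows one common way to assemble a DICOM series into a volumetric array before it is handed to a renderer. It is an illustrative example only: the paper's engine is written in C++ with DirectX 11 and HLSL, whereas this sketch uses the pydicom and NumPy Python packages, assumes the series carries the ImagePositionPatient tag, and the `ct_series/` path is hypothetical.

```python
# Illustrative sketch (not from the paper): assembling a DICOM series into a
# 3D volume array, the kind of dataset a holographic viewer renders.
from pathlib import Path

import numpy as np
import pydicom


def load_dicom_volume(series_dir: str) -> np.ndarray:
    """Read every slice in a directory and stack them into a (Z, Y, X) volume."""
    slices = [pydicom.dcmread(p) for p in sorted(Path(series_dir).glob("*.dcm"))]
    # Order slices along the patient axis so the volume is geometrically correct.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    return np.stack([s.pixel_array.astype(np.int16) for s in slices])


if __name__ == "__main__":
    vol = load_dicom_volume("ct_series/")  # hypothetical path to a CT series
    print(vol.shape, vol.dtype)
```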

Related Content

Semi-supervised learning (SSL) has achieved great success in leveraging large amounts of unlabeled data to learn a promising classifier. A popular approach is pseudo-labeling, which generates pseudo labels only for unlabeled data with high-confidence predictions. As for the low-confidence ones, existing methods often simply discard them because these unreliable pseudo labels may mislead the model. Nevertheless, we highlight that data with low-confidence pseudo labels can still benefit the training process. Specifically, although the class with the highest predicted probability is unreliable, we can assume that the sample is very unlikely to belong to the classes with the lowest probabilities. In this way, such data can also be very informative if we effectively exploit these complementary labels, i.e., the classes that a sample does not belong to. Inspired by this, we propose a novel Contrastive Complementary Labeling (CCL) method that constructs a large number of reliable negative pairs based on the complementary labels and adopts contrastive learning to make use of all the unlabeled data. Extensive experiments demonstrate that CCL significantly improves performance on top of existing methods. More critically, CCL is particularly effective in label-scarce settings. For example, it yields an improvement of 2.43% over FixMatch on CIFAR-10 with only 40 labeled samples.
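
To make the complementary-label idea concrete, the following sketch derives complementary labels from a batch of softmax predictions and builds a boolean mask of negative pairs that a contrastive loss could then push apart. It is a minimal illustration under assumed hyperparameters (confidence threshold, number of complementary classes), not the authors' implementation.

```python
# Minimal sketch (assumptions, not the authors' code): derive complementary
# labels from low-confidence predictions and build a negative-pair mask.
import torch


def complementary_negative_mask(probs: torch.Tensor,
                                conf_threshold: float = 0.95,
                                num_complementary: int = 3) -> torch.Tensor:
    """probs: (N, C) softmax outputs for an unlabeled batch.
    Returns an (N, N) boolean mask marking reliable negative pairs."""
    conf, pseudo = probs.max(dim=1)                     # top-1 confidence / pseudo label
    # Complementary labels: the classes each sample is least likely to belong to.
    comp = probs.argsort(dim=1)[:, :num_complementary]  # (N, k) lowest-probability classes
    high_conf = conf >= conf_threshold
    n = probs.size(0)
    mask = torch.zeros(n, n, dtype=torch.bool)
    for i in range(n):          # low-confidence anchor sample
        if high_conf[i]:
            continue
        for j in range(n):      # confident sample whose pseudo label the anchor excludes
            if high_conf[j] and pseudo[j] in comp[i]:
                mask[i, j] = True
    return mask
```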

Large language models have demonstrated outstanding performance on a wide range of tasks such as question answering and code generation. At a high level, given an input, a language model can be used to automatically complete the sequence in a statistically likely way. Based on this, users prompt these models with language instructions or examples to implement a variety of downstream tasks. Advanced prompting methods can even involve interaction between the language model, a user, and external tools such as calculators. However, to obtain state-of-the-art performance or adapt language models to specific tasks, complex task- and model-specific programs have to be implemented, which may still require ad-hoc interaction. To address this, we present the novel idea of Language Model Programming (LMP). LMP generalizes language model prompting from pure text prompts to an intuitive combination of text prompting and scripting. Additionally, LMP allows constraints to be specified over the language model output. This enables easy adaptation to many tasks while abstracting away language model internals and providing high-level semantics. To enable LMP, we implement LMQL (short for Language Model Query Language), which leverages the constraints and control flow of an LMP prompt to generate an efficient inference procedure that minimizes the number of expensive calls to the underlying language model. We show that LMQL can capture a wide range of state-of-the-art prompting methods in an intuitive way, especially facilitating interactive flows that are challenging to implement with existing high-level APIs. Our evaluation shows that we retain or increase accuracy on several downstream tasks while also significantly reducing the required amount of computation or the cost of pay-to-use APIs (13-85% cost savings).
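
To illustrate what LMP abstracts over, the sketch below shows the ad-hoc pattern the abstract alludes to: scripted control flow around model calls with a post-hoc check of an output constraint. The `complete` callable is a hypothetical stand-in for any completion API, and the retry loop is deliberately naive; LMQL's contribution, per the abstract, is to compile such constraints and control flow into the decoding procedure itself so that expensive model calls are minimized. This is not LMQL syntax.

```python
# Conceptual sketch only (hypothetical `complete` function, not the LMQL API):
# scripted prompting with a constraint checked after generation.
from typing import Callable


def answer_with_constraints(complete: Callable[[str], str], question: str) -> str:
    """Build the prompt with ordinary code, call the model, enforce a length
    constraint on the answer, and retry a bounded number of times."""
    prompt = f"Q: {question}\nA:"
    answer = ""
    for _ in range(3):                       # bounded retries
        answer = complete(prompt).strip()
        if len(answer.split()) <= 20:        # constraint on the decoded span
            return answer
    return answer                            # fall back to the last attempt


if __name__ == "__main__":
    fake_model = lambda p: " Paris is the capital of France."   # stand-in model
    print(answer_with_constraints(fake_model, "What is the capital of France?"))
```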

The U.S. Child Welfare System (CWS) is increasingly seeking to emulate private-sector business models centered on efficiency, cost reduction, and innovation through the adoption of algorithms. These data-driven systems purportedly improve decision-making; however, the public sector poses its own set of challenges with respect to the technical, theoretical, cultural, and societal implications of algorithmic decision-making. To fill these gaps, my dissertation comprises four studies that examine: 1) how caseworkers interact with algorithms in their day-to-day discretionary work, 2) the impact of algorithmic decision-making on the nature of practice, organization, and street-level decision-making, 3) how casenotes can help unpack patterns of invisible labor and contextualize decision-making processes, and 4) how casenotes can help uncover deeper systemic constraints and risk factors that are hard to quantify but directly impact families and street-level decision-making. My goal for this research is to investigate systemic disparities and to design and develop algorithmic systems that are grounded in the theory of practice and improve the quality of human discretionary work. These studies provide actionable steps for human-centered algorithm design in the public sector.

Hoist scheduling has become a bottleneck in electroplating industry applications with the development of autonomous devices. Although a few approaches have been proposed for this challenging problem, they generally cannot scale to large-scale scheduling instances. In this paper, we formulate the hoist scheduling problem as a new temporal planning problem in the form of adapted PDDL and propose a novel hierarchical temporal planning approach to solve it efficiently. Additionally, we provide a collection of real-life benchmark instances that can be used to evaluate solution methods for the problem. We show that, in comparison with state-of-the-art baselines, the proposed approach efficiently finds high-quality solutions for large-scale real-life benchmark instances.
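
For readers unfamiliar with the domain, the sketch below spells out the two constraint families that make hoist scheduling hard: each tank imposes a processing-time window on the job it holds, and the hoist needs travel time to move a job between consecutive tanks. The data layout and the single-job simplification are assumptions for illustration; they are not the paper's PDDL encoding.

```python
# Illustrative sketch (assumed data layout, not the paper's formulation):
# feasibility check of a schedule on a single-hoist electroplating line.
from dataclasses import dataclass


@dataclass
class Tank:
    min_proc: float   # minimum allowed soaking time
    max_proc: float   # maximum allowed soaking time


def schedule_feasible(tanks: list[Tank], entry: list[float],
                      removal: list[float], move_time: float) -> bool:
    """entry[i] / removal[i]: times the job enters / leaves tank i."""
    for i, tank in enumerate(tanks):
        soak = removal[i] - entry[i]
        if not (tank.min_proc <= soak <= tank.max_proc):
            return False                      # processing-window violation
        if i + 1 < len(tanks) and entry[i + 1] < removal[i] + move_time:
            return False                      # hoist needs travel time between tanks
    return True
```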

Software implements a significant proportion of the functionality in factory automation. Efficient development and the reuse of software parts, so-called units, therefore enhance competitiveness. However, complex control software units are difficult to understand, leading to increased development, testing, and maintenance costs. Measuring complexity is challenging due to the many different, subjective views on the topic. This paper compares different complexity definitions from the literature and uses a qualitative questionnaire study to capture the complexity perception of domain experts, who confirm the importance of objective measures for comparing complexity. The paper proposes a set of metrics that measure various classes of software complexity to identify the most complex software units as a prerequisite for refactoring. The metrics cover complexity caused by size, data structure, control flow, information flow, and lexical structure. Unlike most approaches in the literature, the metrics support both the graphical and the textual languages of the IEC 61131-3 standard. Further, a concept for interpreting the metric results is presented. A comprehensive evaluation with industrial software from two German plant manufacturers validates the metrics' suitability for measuring complexity.
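
As a concrete illustration of two of the metric classes named above, the sketch below computes a size measure and a keyword-based approximation of control-flow (cyclomatic) complexity for a textual IEC 61131-3 Structured Text unit. The keyword list and the comment convention are assumptions made for the example; the paper's metric suite is broader and also covers graphical languages.

```python
# Hedged sketch (not the paper's metric suite): two simple complexity measures
# for an IEC 61131-3 Structured Text source given as a string.
import re

DECISION_KEYWORDS = ("IF", "ELSIF", "CASE", "FOR", "WHILE", "REPEAT")


def size_metric(source: str) -> int:
    """Count non-empty, non-comment lines as a proxy for size complexity."""
    lines = [line.strip() for line in source.splitlines()]
    return sum(1 for line in lines if line and not line.startswith("//"))


def control_flow_metric(source: str) -> int:
    """Rough cyclomatic complexity: number of decision keywords plus one."""
    decisions = sum(len(re.findall(rf"\b{kw}\b", source, re.IGNORECASE))
                    for kw in DECISION_KEYWORDS)
    return decisions + 1
```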

Assignment mechanisms for many-to-one matching markets with preferences revolve around the key concept of stability. Using school choice as our matching-market application, we introduce the problem of jointly allocating a school capacity expansion and finding the best stable allocation for the students in the expanded market. We analyze the problem theoretically, focusing on the trade-offs behind the multiplicity of student-optimal assignments, the incentive properties, and the problem's complexity. Because the problem cannot be solved efficiently with classical methods, we generalize existing mathematical programming formulations of stability constraints to our setting, most of which result in integer quadratically constrained programs. In addition, we propose a novel mixed-integer linear programming formulation that is exponentially large in the problem size. We show that its stability constraints can be separated in linear time, leading to an effective cutting-plane method. We evaluate the performance of our approaches in a detailed computational study and find that our cutting-plane method outperforms mixed-integer programming solvers applied to the formulations obtained by extending existing approaches. We also propose two heuristics that are effective for large instances of the problem. Finally, we use data from the Chilean school choice system to demonstrate the impact of capacity planning under stability conditions. Our results show that each additional school seat can benefit multiple students. Moreover, our methodology can prioritize the assignment of previously unassigned students or improve the assignment of several students through improvement chains. These insights allow the decision-maker to tune the matching algorithm and provide a fair, application-oriented solution.
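
To ground the separation step mentioned above, the sketch below checks a given assignment for a blocking pair, i.e., a student who prefers a school to her current assignment while that school either has a free seat or admits a student she outranks. The data model (dictionaries of preference lists, priority ranks, assignments, and capacities) is assumed for illustration, and the loop is a plain check rather than the paper's linear-time separation routine.

```python
# Illustrative sketch (simplified data model, not the paper's formulation):
# a violated stability constraint corresponds to a blocking pair.
def find_blocking_pair(pref, priority_rank, assign, capacity):
    """pref[i]: student i's schools, best first; priority_rank[s][i]: rank of i
    at school s (lower = higher priority); assign[i]: school of i or None;
    capacity[s]: number of seats at s. Returns a blocking pair or None."""
    enrolled = {s: [i for i, a in assign.items() if a == s] for s in capacity}
    for i, schools in pref.items():
        for s in schools:
            if s == assign[i]:
                break                        # i already holds a weakly better seat
            if len(enrolled[s]) < capacity[s]:
                return (i, s)                # empty seat at a preferred school
            worst = max(enrolled[s], key=lambda j: priority_rank[s][j])
            if priority_rank[s][i] < priority_rank[s][worst]:
                return (i, s)                # i outranks s's worst admitted student
    return None                              # the assignment is stable
```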

Given ample experimental data from a system governed by differential equations, it is possible to use deep learning techniques to construct the underlying differential operators. In this work, we perform symbolic discovery of differential operators in a setting with sparse experimental data. This small-data regime in machine learning can be made tractable by providing our algorithms with prior information about the underlying dynamics. Physics-Informed Neural Networks (PINNs) have been very successful in this regime (reconstructing entire ODE solutions using only a single point, or entire PDE solutions with very few measurements of the initial condition). We modify the PINN approach by adding a neural network that learns a representation of the unknown hidden terms in the differential equation. The algorithm yields both a surrogate solution to the differential equation and a black-box representation of the hidden terms. These hidden-term neural networks can then be converted into symbolic equations using symbolic regression techniques such as AI Feynman. To achieve convergence of these neural networks, we provide our algorithms with (noisy) measurements of the initial condition as well as (synthetic) experimental data obtained at later times. We demonstrate strong performance of this approach even when provided with very few measurements of noisy data, in both the ODE and PDE regimes.
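
The sketch below illustrates this kind of modified PINN training loop for a toy 1-D ODE: one network approximates the solution u(t), a second network learns the unknown hidden term, and both are trained against the physics residual plus a handful of noisy observations. The specific ODE (du/dt = -u + h(u)), the PyTorch implementation, and the observation values are assumptions chosen for illustration, not the authors' setup.

```python
# Minimal sketch under assumptions: a PINN with an extra network for the
# unknown hidden term of the ODE du/dt = -u + h(u).
import torch
import torch.nn as nn


def mlp(width: int = 32) -> nn.Module:
    return nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                         nn.Linear(width, width), nn.Tanh(),
                         nn.Linear(width, 1))


u_net, h_net = mlp(), mlp()                   # surrogate solution and hidden term
opt = torch.optim.Adam(list(u_net.parameters()) + list(h_net.parameters()), lr=1e-3)

t_col = torch.linspace(0, 2, 64).reshape(-1, 1).requires_grad_(True)  # collocation points
t_obs = torch.tensor([[0.0], [0.5], [1.0]])                           # sparse, noisy data
u_obs = torch.tensor([[1.0], [0.62], [0.40]])                         # assumed measurements

for step in range(2000):
    u = u_net(t_col)
    du_dt = torch.autograd.grad(u, t_col, torch.ones_like(u), create_graph=True)[0]
    residual = du_dt - (-u + h_net(u))        # known part -u, learned hidden part h(u)
    loss = residual.pow(2).mean() + (u_net(t_obs) - u_obs).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```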

With the explosive growth of information technology, multi-view graph data have become increasingly prevalent and valuable. Most existing multi-view clustering techniques focus either on multiple graphs or on multi-view attributes. In this paper, we propose a generic framework for clustering multi-view attributed graph data. Specifically, inspired by the success of contrastive learning, we propose the multi-view contrastive graph clustering (MCGC) method to learn a consensus graph, since the original graph could be noisy or incomplete and is not directly applicable. Our method consists of two key steps: we first filter out undesirable high-frequency noise while preserving the graph's geometric features via graph filtering, obtaining a smooth representation of the nodes; we then learn a consensus graph regularized by a graph contrastive loss. Results on several benchmark datasets show the superiority of our method over state-of-the-art approaches. In particular, our simple approach outperforms existing deep learning-based methods.
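
The first step, graph filtering, can be illustrated in a few lines of NumPy: build the symmetrically normalized Laplacian of one view and repeatedly apply a low-pass filter of the form (I - L/2) to the node attributes so that high-frequency components are attenuated. The particular filter and the number of filtering steps are assumptions chosen for this sketch rather than the exact configuration of MCGC.

```python
# Hedged sketch (plain NumPy): low-pass graph filtering of node attributes
# to obtain a smooth representation before consensus-graph learning.
import numpy as np


def smooth_representation(adj: np.ndarray, features: np.ndarray, k: int = 2) -> np.ndarray:
    """adj: (n, n) adjacency matrix of one view; features: (n, d) attributes."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                                     # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    laplacian = np.eye(n) - d_inv_sqrt @ a_hat @ d_inv_sqrt     # symmetric normalized L
    low_pass = np.eye(n) - 0.5 * laplacian                      # filter I - L/2
    smoothed = features
    for _ in range(k):                                          # apply the filter k times
        smoothed = low_pass @ smoothed
    return smoothed
```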

Recent advances in 3D fully convolutional networks (FCNs) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafted features or class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that first uses a 3D FCN to roughly define a candidate region, which is then used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to approximately 10% and allows it to focus on a more detailed segmentation of the organs and vessels. We utilize training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection of 150 CT scans acquired at a different hospital, targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve significantly higher performance on small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN-based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download: //github.com/holgerroth/3Dunet_abdomen_cascade.
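
The cascade logic described above can be sketched in a few lines: stage 1 produces a rough probability map over the whole volume, its thresholded mask defines a padded bounding box, and stage 2 segments only that crop. The callables `stage1_fcn` and `stage2_fcn` are hypothetical stand-ins for the two trained networks; this is an illustration, not the released code.

```python
# Illustrative sketch (assumed helper callables, not the released code):
# coarse-to-fine cascade of two 3D FCNs on a CT volume.
import numpy as np


def candidate_bbox(coarse_mask: np.ndarray, margin: int = 8):
    """Bounding box of all foreground voxels, padded by a voxel margin.
    Assumes stage 1 found at least one foreground voxel."""
    zs, ys, xs = np.nonzero(coarse_mask)
    lo = [max(int(v.min()) - margin, 0) for v in (zs, ys, xs)]
    hi = [min(int(v.max()) + margin + 1, size)
          for v, size in zip((zs, ys, xs), coarse_mask.shape)]
    return tuple(slice(a, b) for a, b in zip(lo, hi))


def cascade_segment(volume, stage1_fcn, stage2_fcn, threshold=0.5):
    coarse_prob = stage1_fcn(volume)                   # (Z, Y, X) foreground probability
    region = candidate_bbox(coarse_prob > threshold)   # candidate region from stage 1
    fine = np.zeros_like(coarse_prob)
    fine[region] = stage2_fcn(volume[region])          # stage 2 classifies only the crop
    return fine
```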
