
We present "HoVer-UNet", an approach to distill the knowledge of the multi-branch HoVerNet framework for nuclei instance segmentation and classification in histopathology. We propose a compact, streamlined single UNet network with a Mix Vision Transformer backbone, and equip it with a custom loss function to optimally encode the distilled knowledge of HoVerNet, reducing computational requirements without compromising performances. We show that our model achieved results comparable to HoVerNet on the public PanNuke and Consep datasets with a three-fold reduction in inference time. We make the code of our model publicly available at //github.com/DIAGNijmegen/HoVer-UNet.

Related Content


Time-Triggered Ethernet (TTEthernet) has been widely applied in scenarios such as the industrial internet, automotive electronics, and aerospace, and offline routing and scheduling for TTEthernet has been investigated extensively. However, predetermined routes and schedules cannot meet the demands of agile scenarios such as smart factories, autonomous driving, and satellite network switching, where transmission requests frequently join and leave the network. We therefore study the online joint routing and scheduling problem for TTEthernet. Balancing efficiency and effectiveness of routing and scheduling in an online environment is quite challenging. To ensure high-quality and fast routing and scheduling, we first design a time-slot expanded graph (TSEG) to model the available resources of TTEthernet over time. The fine-grained representation of the TSEG allows a time slot to be chosen by choosing an edge, thus transforming the scheduling problem into a simple routing problem. Next, we design a dynamic weighting method for each edge in the TSEG and propose an algorithm to co-optimize routing and scheduling. Compared to existing methods, our scheme improves TTEthernet throughput by co-optimizing routing and scheduling to eliminate potential conflicts among flow requests. Extensive simulation results show that our scheme runs more than 400 times faster than a standard solution (an ILP solver), while the gap to the optimal number of scheduled flow requests is only 2%. Moreover, compared to existing schemes, our method increases the number of successfully scheduled flows by more than 18%.
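
As a rough illustration of the time-slot expanded graph idea, the sketch below builds a graph whose nodes are (switch, time slot) pairs and whose edges represent transmitting on a physical link during a specific slot, so that a shortest path jointly fixes the route and the schedule. The unit link delay, the uniform edge weights, and the helper names are assumptions for illustration; the paper's dynamic edge weighting is not reproduced here.

```python
import heapq
from collections import defaultdict

def build_tseg(links, num_slots, occupied):
    """Build an illustrative time-slot expanded graph.

    links    : iterable of directed physical links (u, v), assumed 1-slot delay
    num_slots: number of time slots in the scheduling cycle
    occupied : set of (u, v, t) triples already reserved by earlier flows

    Nodes are (switch, slot) pairs; an edge (u, t) -> (v, t+1) means
    "transmit on link u->v during slot t", so choosing an edge chooses a
    time slot and scheduling reduces to routing.
    """
    adj = defaultdict(list)
    for (u, v) in links:
        for t in range(num_slots - 1):
            if (u, v, t) not in occupied:
                adj[(u, t)].append(((v, t + 1), 1.0))  # weight could be dynamic
    return adj

def route_and_schedule(adj, src, dst, num_slots):
    """Shortest path over the TSEG = joint route + schedule for one flow."""
    best, heap = {}, []
    for t in range(num_slots):            # the flow may start in any slot
        heapq.heappush(heap, (0.0, (src, t), [(src, t)]))
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node in best:
            continue
        best[node] = cost
        if node[0] == dst:
            return path                   # [(switch, slot), ...]
        for nxt, w in adj[node]:
            if nxt not in best:
                heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
    return None

# Toy example: schedule a flow from A to C over links A->B->C in a 6-slot cycle.
links = [("A", "B"), ("B", "C")]
tseg = build_tseg(links, num_slots=6, occupied={("A", "B", 0)})
print(route_and_schedule(tseg, "A", "C", num_slots=6))
```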

Advances in survival analysis have facilitated unprecedented flexibility in data modeling, yet there remains a lack of tools for graphically illustrating the influence of continuous covariates on predicted survival outcomes. We propose using a colored contour plot to depict predicted survival probabilities over time, and provide a Shiny app and an R package as implementations of this tool. Our approach supports conventional models, including the Cox and Fine-Gray models, but its capability shines when coupled with cutting-edge machine learning models such as random survival forests and deep neural networks.
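
The authors provide a Shiny app and an R package; as a language-agnostic illustration of the same visual idea, the Python sketch below fits a Cox model on synthetic data (using the lifelines and matplotlib packages) and draws a filled contour of predicted survival probability over time against a continuous covariate. All data and parameter choices are invented for the example and do not reflect the authors' implementation.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import CoxPHFitter

# Synthetic data: survival time depends on a continuous covariate "age".
rng = np.random.default_rng(0)
n = 500
age = rng.uniform(40, 80, n)
T = rng.exponential(scale=np.exp(4 - 0.03 * age))   # true event times
C = rng.exponential(scale=20, size=n)                # censoring times
df = pd.DataFrame({"age": age,
                   "time": np.minimum(T, C),
                   "event": (T <= C).astype(int)})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")

# Grid of covariate values (y-axis) and follow-up times (x-axis).
ages = np.linspace(40, 80, 60)
times = np.linspace(0.1, 30, 80)
surv = cph.predict_survival_function(pd.DataFrame({"age": ages}), times=times)

# surv has one row per time point and one column per covariate value.
tt, aa = np.meshgrid(times, ages)
plt.contourf(tt, aa, surv.T.values, levels=20, cmap="viridis")
plt.colorbar(label="Predicted survival probability")
plt.xlabel("Time")
plt.ylabel("Age (continuous covariate)")
plt.title("Survival probability surface from a Cox model")
plt.show()
```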

Adversarial generative models, such as Generative Adversarial Networks (GANs), are widely applied to generating various types of data, e.g., images, text, and audio. Their promising performance has led to GAN-based adversarial attack methods in white-box and black-box attack scenarios. The importance of transferable black-box attacks lies in their ability to be effective across different models and settings, which aligns more closely with real-world applications. However, it remains challenging for such methods to maintain the transferability of adversarial examples. Meanwhile, we observe that some enhanced gradient-based transferable adversarial attack algorithms require prolonged time for adversarial sample generation. Thus, in this work, we propose a novel algorithm named GE-AdvGAN to enhance the transferability of adversarial samples while improving the algorithm's efficiency. The main idea is to optimise the training process of the generator parameters. Based on a functional and characteristic similarity analysis, we introduce a novel gradient editing (GE) mechanism and verify its feasibility for generating transferable samples on various models. Moreover, by exploring frequency-domain information to determine the gradient editing direction, GE-AdvGAN generates highly transferable adversarial samples while minimizing execution time compared to state-of-the-art transferable adversarial attack algorithms. The performance of GE-AdvGAN is comprehensively evaluated through large-scale experiments on different datasets, and the results demonstrate the superiority of our algorithm. The code for our algorithm is available at: //github.com/LMBTough/GE-advGAN
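
The gradient editing mechanism itself is specific to the paper; the sketch below only loosely illustrates the general idea of editing a surrogate model's input gradient in the frequency domain before it guides the generator. The low-pass editing rule, the keep_ratio parameter, and the final sign step are hypothetical stand-ins, not GE-AdvGAN's actual procedure.

```python
import torch
import torch.nn.functional as F

def edited_gradient(surrogate, images, labels, keep_ratio=0.25):
    """Hypothetical gradient-editing step: compute the surrogate model's
    input gradient, then keep only its low-frequency components, on the
    assumption that low-frequency directions transfer better across models.
    All names and the exact editing rule here are illustrative."""
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(surrogate(images), labels)
    grad, = torch.autograd.grad(loss, images)

    # Move the gradient to the frequency domain and zero out high frequencies.
    spec = torch.fft.fftshift(torch.fft.fft2(grad), dim=(-2, -1))
    h, w = grad.shape[-2:]
    kh, kw = int(h * keep_ratio), int(w * keep_ratio)
    mask = torch.zeros_like(spec.real)
    mask[..., h // 2 - kh:h // 2 + kh, w // 2 - kw:w // 2 + kw] = 1.0
    edited = torch.fft.ifft2(torch.fft.ifftshift(spec * mask, dim=(-2, -1))).real
    return edited.sign()      # direction that could guide the generator update

# Toy usage with a small surrogate classifier.
net = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1),
                          torch.nn.AdaptiveAvgPool2d(1),
                          torch.nn.Flatten(),
                          torch.nn.Linear(8, 10))
x = torch.rand(2, 3, 32, 32)
y = torch.randint(0, 10, (2,))
print(edited_gradient(net, x, y).shape)   # torch.Size([2, 3, 32, 32])
```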

We present a comprehensive analysis of the implications of artificial latency in the Proposer-Builder Separation framework on the Ethereum network. Focusing on the MEV-Boost auction system, we analyze how strategic latency manipulation affects Maximum Extractable Value yields and network integrity. Our findings reveal both increased profitability for node operators and significant systemic challenges, including heightened network inefficiencies and centralization risks. We empirically validate these insights with a pilot that Chorus One has been operating on Ethereum mainnet, and demonstrate the nuanced effects of latency on bid selection and validator dynamics. Ultimately, this research underscores the need for balanced strategies that optimize Maximum Extractable Value capture while preserving the Ethereum network's decentralization ethos.

Physics-informed neural networks (PINNs) have shown remarkable promise in solving forward and inverse problems involving partial differential equations (PDEs). The method embeds PDEs into the neural network by evaluating the PDE loss at a set of collocation points, providing advantages such as being mesh-free and allowing more convenient adaptive sampling. However, when solving PDEs with nonuniform collocation points, PINNs still face challenges such as inefficient convergence of the PDE residuals, or even outright failure. In this work, we first analyze the ill-conditioning of the PDE loss in PINNs under nonuniform collocation points. To address this issue, we define a volume-weighted residual and propose volume-weighted physics-informed neural networks (VW-PINNs). By weighting the PDE residuals by the volume that each collocation point occupies within the computational domain, we explicitly embed the spatial distribution characteristics of the collocation points in the residual evaluation, which guarantees fast and sufficient convergence of the PDE residuals for problems with nonuniform collocation points. Considering the mesh-free nature of VW-PINNs, we also develop a volume approximation algorithm based on kernel density estimation to compute the volume associated with each collocation point. We verify the generality of VW-PINNs by solving forward problems involving flow over a circular cylinder and flow over the NACA0012 airfoil under different inflow conditions, where conventional PINNs fail. By solving the Burgers' equation, we verify that VW-PINNs can improve the efficiency of an existing adaptive sampling method in solving the forward problem by a factor of three, and can reduce the relative error of conventional PINNs in solving the inverse problem by more than one order of magnitude.
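
To make the volume-weighting idea concrete, the sketch below estimates a per-point volume from a kernel density estimate (dense regions get small volumes, sparse regions get large ones) and uses it to weight squared PDE residuals, approximating the integral of the squared residual over the domain. The KDE bandwidth choice and the exact weighting rule are assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.stats import gaussian_kde

def volume_weights(points):
    """Approximate the volume associated with each collocation point from a
    kernel density estimate: V_i ~ 1 / (N * p(x_i)), so that the weights sum
    to roughly the effective domain size (a sketch of the idea only)."""
    density = gaussian_kde(points.T)(points.T)   # estimated p(x_i)
    return 1.0 / (len(points) * density)

def vw_pde_loss(residuals, weights):
    """Volume-weighted PDE residual loss: sum_i V_i * r(x_i)^2,
    a discrete approximation of the integral of r^2 over the domain."""
    return np.sum(weights * residuals ** 2)

# Nonuniform 2-D collocation points clustered near the origin.
rng = np.random.default_rng(0)
pts = rng.normal(scale=[0.2, 0.2], size=(2000, 2))
w = volume_weights(pts)
print("sum of per-point volumes (~ effective domain size):", w.sum())
print("weighted loss for dummy residuals:", vw_pde_loss(rng.normal(size=2000), w))
```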

Spatial regression models are central to the field of spatial statistics. Nevertheless, their estimation on large and irregularly gridded spatial datasets presents considerable computational challenges. To tackle these problems, Arbia \citep{arbia_2014_pairwise} introduced a pseudo-likelihood approach (called pairwise likelihood, PL) that requires identifying pairs of observations that are internally correlated but mutually conditionally uncorrelated. However, while PL estimators enjoy optimal theoretical properties, their practical implementation on data observed on irregular grids suffers from severe computational issues (connected with the identification of the pairs of observations) that, in most empirical cases, negatively counterbalance their advantages. In this paper we introduce an algorithm specifically designed to streamline the computation of the PL on large and irregularly gridded spatial datasets, dramatically simplifying the estimation phase. In particular, we focus on the estimation of Spatial Error Models (SEM). Our proposed approach efficiently pairs spatial observations by exploiting the KD-tree data structure and uses these pairs to derive closed-form expressions for fast parameter approximation. To showcase the efficiency of our method, we provide an illustrative example using simulated data, demonstrating that the computational advantages over full-likelihood inference do not come at the expense of accuracy.
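
As an illustration of how a KD-tree can speed up the pairing step, the sketch below greedily matches each observation with its nearest not-yet-paired neighbour using scipy's cKDTree. The greedy rule and the fixed candidate count are simplifications for illustration; the paper's pairing criterion and its closed-form PL expressions are not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def pair_observations(coords):
    """Greedy nearest-neighbour pairing with a KD-tree (an illustrative sketch
    of the pairing step, not the authors' full algorithm): each observation is
    matched with its closest not-yet-paired neighbour, so the two members of a
    pair are strongly correlated while distinct pairs tend to be far apart."""
    tree = cKDTree(coords)
    unpaired = set(range(len(coords)))
    pairs = []
    for i in range(len(coords)):
        if i not in unpaired:
            continue
        # Query a handful of neighbours and take the closest one still unpaired.
        _, idx = tree.query(coords[i], k=min(10, len(coords)))
        for j in idx[1:]:                 # idx[0] is the point itself
            j = int(j)
            if j in unpaired and j != i:
                pairs.append((i, j))
                unpaired.discard(i)
                unpaired.discard(j)
                break
    return pairs

coords = np.random.default_rng(1).uniform(size=(1000, 2))   # irregular grid
print(len(pair_observations(coords)), "pairs formed")
```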

We establish Bernstein's inequalities for functions of general (general-state-space and possibly non-reversible) Markov chains. These inequalities achieve sharp variance proxies and encompass the classical Bernstein inequality for independent random variables as a special case. The key step of the analysis lies in bounding the operator norm of a perturbed Markov transition kernel by the exponential of the sum of two convex functions: one coincides with the term that delivers the classical Bernstein inequality, and the other reflects the influence of the Markov dependence. A convex analysis of these two functions then yields our Bernstein inequalities. As applications, we apply our Bernstein inequalities to the Markov chain Monte Carlo integral estimation problem and to the robust mean estimation problem with Markov-dependent samples, achieving tight deviation bounds that previous inequalities cannot deliver.
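
For orientation, the classical independent-sample Bernstein inequality that the result above contains as a special case can be stated as follows (a standard textbook form, not the paper's Markov-chain version):

```latex
% For independent, zero-mean random variables X_1, ..., X_n with
% |X_i| <= M almost surely and sigma^2 = sum_i E[X_i^2]:
\[
  \mathbb{P}\!\left(\sum_{i=1}^{n} X_i \ge t\right)
  \;\le\;
  \exp\!\left(-\frac{t^2}{2\bigl(\sigma^2 + M t / 3\bigr)}\right),
  \qquad t \ge 0 .
\]
```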

Large-amplitude current-driven plasma instabilities, which can transition to the Buneman instability, were observed in one-dimensional (1D) simulations to generate high-energy backstreaming ions. We investigate the saturation of multi-dimensional plasma instabilities and its effects on energetic ion formation. Such ions directly impact spacecraft thruster lifetimes and are associated with magnetic reconnection and cosmic ray inception. An Eulerian Vlasov--Poisson solver employing the grid-based direct kinetic method is used to study the growth and saturation of 2D2V collisionless, electrostatic current-driven instabilities, spanning two dimensions each in configuration (D) and velocity (V) space and supporting ion and electron phase-space transport. Four stages characterise the evolution of the electric potential in such instabilities: linear modal growth, harmonic growth, accelerated growth via quasi-linear mechanisms alongside non-linear fill-in, and saturated turbulence. The transition and isotropisation process bears considerable similarity to the development of hydrodynamic turbulence. While a tendency towards isotropy is observed in the plasma waves, followed by the electron and then the ion phase space after several ion-acoustic periods, the formation of energetic backstreaming ions is more limited in the 2D2V simulations than in the 1D1V simulations. Plasma waves formed by two-dimensional electrostatic kinetic instabilities can propagate in the direction perpendicular to the net electron drift. Thus, large-amplitude multi-dimensional waves generate high-energy transverse-streaming ions and ultimately limit energetic backward-streaming ions along the longitudinal direction. This multi-dimensional study sheds light on the interactions between longitudinal and transverse electrostatic plasma instabilities, as well as fundamental characteristics of the inception and sustenance of unmagnetised plasma turbulence.

The problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA context (question and answer choice), methods need to (i) identify relevant knowledge from large KGs, and (ii) perform joint reasoning over the QA context and KG. In this work, we propose a new model, QA-GNN, which addresses the above challenges through two key innovations: (i) relevance scoring, where we use LMs to estimate the importance of KG nodes relative to the given QA context, and (ii) joint reasoning, where we connect the QA context and KG to form a joint graph, and mutually update their representations through graph neural networks. We evaluate QA-GNN on the CommonsenseQA and OpenBookQA datasets, and show its improvement over existing LM and LM+KG models, as well as its capability to perform interpretable and structured reasoning, e.g., correctly handling negation in questions.
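
A minimal sketch of how relevance scores and the joint graph could interact is given below. QA-GNN scores KG nodes with a language model and reasons with graph neural network layers; this toy instead uses cosine similarity as a stand-in relevance score and a single hand-rolled weighted message-passing step, purely to illustrate where the two innovations enter the computation.

```python
import torch
import torch.nn.functional as F

def relevance_scores(context_emb, node_embs):
    """Toy relevance scoring: cosine similarity between the QA-context
    embedding and each KG-node embedding (a stand-in for LM-based scoring)."""
    return F.cosine_similarity(context_emb.unsqueeze(0), node_embs, dim=-1)

def joint_graph_message_pass(context_emb, node_embs, edges):
    """One round of message passing over the joint graph in which the QA
    context is an extra node (index 0) connected to every KG node."""
    x = torch.cat([context_emb.unsqueeze(0), node_embs], dim=0)   # (N+1, d)
    scores = relevance_scores(context_emb, node_embs)
    # Connect the context node to all KG nodes, weighted by relevance.
    ctx_edges = [(0, i + 1, float(s)) for i, s in enumerate(scores)]
    all_edges = ctx_edges + [(u + 1, v + 1, 1.0) for (u, v) in edges]

    out = x.clone()
    deg = torch.ones(len(x))
    for u, v, w in all_edges:            # symmetric, weighted mean aggregation
        out[u] += w * x[v]; out[v] += w * x[u]
        deg[u] += abs(w);   deg[v] += abs(w)
    return out / deg.unsqueeze(-1)

# Tiny example: 4 KG nodes with random embeddings and a few KG edges.
torch.manual_seed(0)
ctx, nodes = torch.randn(16), torch.randn(4, 16)
updated = joint_graph_message_pass(ctx, nodes, edges=[(0, 1), (1, 2), (2, 3)])
print(updated.shape)    # torch.Size([5, 16]) -- context node + 4 KG nodes
```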

The recent proliferation of knowledge graphs (KGs), coupled with incomplete or partial information in the form of missing relations (links) between entities, has fueled a lot of research on knowledge base completion (also known as relation prediction). Several recent works suggest that convolutional neural network (CNN) based models generate richer and more expressive feature embeddings and hence also perform well on relation prediction. However, we observe that these KG embeddings treat triples independently and thus fail to capture the complex and hidden information that is inherently implicit in the local neighborhood surrounding a triple. To this end, our paper proposes a novel attention-based feature embedding that captures both entity and relation features in any given entity's neighborhood. Additionally, we also encapsulate relation clusters and multi-hop relations in our model. Our empirical study offers insights into the efficacy of our attention-based model, and we show marked performance gains in comparison to state-of-the-art methods on all datasets.
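
The sketch below illustrates the general shape of attention over neighbourhood triples: each (head, relation, tail) triple around an entity is embedded jointly, scored with a learned attention vector, and aggregated with softmax weights into an updated entity representation. The layer sizes and the scoring form are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TripleAttentionLayer(nn.Module):
    """Minimal sketch of attention over neighbourhood triples: each triple
    (h, r, t) in an entity's neighbourhood is embedded, scored with a learned
    attention vector, and the entity representation is the attention-weighted
    sum of its triple embeddings."""

    def __init__(self, dim):
        super().__init__()
        self.W = nn.Linear(3 * dim, dim, bias=False)   # embeds concatenated (h, r, t)
        self.a = nn.Linear(dim, 1, bias=False)         # attention scorer

    def forward(self, h_emb, r_emb, t_emb):
        # h_emb, r_emb, t_emb: (num_neighbour_triples, dim) for one entity
        c = self.W(torch.cat([h_emb, r_emb, t_emb], dim=-1))   # triple embeddings
        alpha = F.softmax(F.leaky_relu(self.a(c)), dim=0)      # attention weights
        return (alpha * c).sum(dim=0)                          # updated entity vector

# Toy usage: an entity with 5 neighbouring triples in a 32-dimensional space.
layer = TripleAttentionLayer(dim=32)
h, r, t = torch.randn(5, 32), torch.randn(5, 32), torch.randn(5, 32)
print(layer(h, r, t).shape)    # torch.Size([32])
```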
