
This paper addresses category-agnostic instance segmentation for robotic manipulation, focusing on segmenting objects independently of their class to enable versatile applications such as bin-picking in dynamic environments. Existing methods often lack generalizability and object-specific information, leading to grasp failures. We present a novel approach that leverages object-centric instance segmentation and simulation-based training for effective transfer to real-world scenarios. Notably, our strategy overcomes the challenges posed by noisy depth sensors, improving the reliability of learning, and it accommodates transparent and semi-transparent objects, which are historically difficult for depth-based grasping methods. Our contributions include domain randomization for successful sim-to-real transfer, a dataset collected for warehouse applications, and an integrated framework for efficient bin-picking. Our trained instance segmentation model achieves state-of-the-art performance on the public WISDOM benchmark [1] as well as on our custom dataset. In a challenging real-world bin-picking setup, our framework achieves 98% accuracy for opaque objects and 97% for non-opaque objects, outperforming state-of-the-art baselines by a large margin.
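
As a rough illustration of what "category-agnostic" means in practice, the following minimal sketch wraps an off-the-shelf torchvision Mask R-CNN and simply discards its class predictions, keeping only instance masks and confidence scores. The model choice, score threshold, and usage are assumptions for illustration only, not the paper's network or training setup.

```python
# Hypothetical sketch: class-agnostic use of an off-the-shelf Mask R-CNN.
# Class labels are ignored; only masks and scores are kept.
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def segment_instances(rgb, score_thresh=0.7):
    """Return binary instance masks for an RGB tensor (3, H, W) in [0, 1]."""
    with torch.no_grad():
        out = model([rgb])[0]               # dict: boxes, labels, scores, masks
    keep = out["scores"] > score_thresh
    masks = out["masks"][keep, 0] > 0.5     # predicted classes are never used
    return masks                            # (N, H, W) boolean instance masks

# Usage: masks = segment_instances(torch.rand(3, 480, 640))
```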

Related content

This paper is concerned with the problem of sampling and interpolation involving derivatives in shift-invariant spaces and the error analysis of the derivative sampling expansions for fundamentally large classes of functions. A new type of polynomials based on derivative samples is introduced, which is different from the Euler-Frobenius polynomials for the multiplicity $r>1$. A complete characterization of uniform sampling with derivatives is given using Laurent operators. The rate of approximation of a signal (not necessarily continuous) by the derivative sampling expansions in shift-invariant spaces generated by compactly supported functions is established in terms of $L^p$- average modulus of smoothness. Finally, several typical examples illustrating the various problems are discussed in detail.
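
As a schematic illustration (the notation below is assumed for this sketch and is not taken from the paper), a shift-invariant space generated by $\varphi$ and a derivative (Hermite-type) sampling expansion of multiplicity $r$ can be written as

$$V^p(\varphi)=\Big\{\sum_{k\in\mathbb{Z}} c_k\,\varphi(\cdot-k)\;:\;(c_k)\in\ell^p(\mathbb{Z})\Big\},\qquad
f(x)=\sum_{k\in\mathbb{Z}}\sum_{j=0}^{r-1} f^{(j)}(k)\,S_j(x-k),$$

where the reconstruction functions $S_j$ are determined by the generator $\varphi$; the paper's contribution concerns when such expansions are well posed and how fast they converge, measured by the $L^p$-average modulus of smoothness.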

We present an information-theoretic lower bound for the problem of parameter estimation with time-uniform coverage guarantees. Via a new reduction to sequential testing, we obtain stronger lower bounds that capture the hardness of the time-uniform setting. In the case of location model estimation, logistic regression, and exponential family models, our $\Omega(\sqrt{n^{-1}\log \log n})$ lower bound is sharp to within constant factors in typical settings.
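
Schematically (precise conditions and constants are omitted and are assumptions of this illustration), the result contrasts the fixed-time and time-uniform regimes: a confidence sequence $(\mathrm{CI}_n)_{n\ge 1}$ that is valid simultaneously over all sample sizes must satisfy

$$\Pr\big(\exists\, n\ge 1:\ \theta\notin \mathrm{CI}_n\big)\le\alpha \;\;\Longrightarrow\;\; \operatorname{width}(\mathrm{CI}_n)=\Omega\!\Big(\sqrt{\tfrac{\log\log n}{n}}\Big),$$

whereas a confidence interval valid only at a single fixed $n$ can have width of order $\sqrt{1/n}$.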

Quality assessment, including inspecting the images for artifacts, is a critical step during MRI data acquisition to ensure data quality and the success of downstream analysis or interpretation. This study demonstrates a deep learning model to detect rigid motion in T1-weighted brain images. We leveraged a 2D CNN for three-class classification and tested it on publicly available retrospective and prospective datasets. Grad-CAM heatmaps enabled the identification of failure modes and provided an interpretation of the model's results. The model achieved average precision and recall metrics of 85% and 80% on six motion-simulated retrospective datasets. Additionally, the model's classifications on the prospective dataset showed a strong inverse correlation (-0.84) with average edge strength, an image quality metric indicative of motion. This model is part of the ArtifactID tool, aimed at inline automatic detection of Gibbs ringing, wrap-around, and motion artifacts. This tool automates part of the time-consuming QA process and augments expertise on-site, which is particularly relevant in low-resource settings where local MR knowledge is scarce.
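
The sketch below shows what a small 2D CNN for three-class motion classification could look like; the layer sizes and the three-class head are illustrative assumptions and do not reproduce the ArtifactID architecture or its training details.

```python
# Minimal sketch of a 2D CNN for three-class motion classification.
import torch
import torch.nn as nn

class MotionCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                  # x: (B, 1, H, W) T1-weighted slices
        return self.head(self.features(x).flatten(1))

logits = MotionCNN()(torch.randn(4, 1, 256, 256))   # -> (4, 3) class scores
```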

The goal of this paper is to investigate a family of optimization problems arising from list homomorphisms, and to understand what the best possible algorithms are if we restrict the problem to bounded-treewidth graphs. For a fixed $H$, the input of the optimization problem LHomVD($H$) is a graph $G$ with lists $L(v)$, and the task is to find a set $X$ of vertices having minimum size such that $(G-X,L)$ has a list homomorphism to $H$. We define analogously the edge-deletion variant LHomED($H$). This expressive family of problems includes members that are essentially equivalent to fundamental problems such as Vertex Cover, Max Cut, Odd Cycle Transversal, and Edge/Vertex Multiway Cut. For both variants, we first characterize those graphs $H$ that make the problem polynomial-time solvable and show that the problem is NP-hard for every other fixed $H$. Second, as our main result, we determine, for every graph $H$ for which the problem is NP-hard, the smallest possible constant $c_H$ such that the problem can be solved in time $c_H^t\cdot n^{O(1)}$ if a tree decomposition of $G$ having width $t$ is given in the input. Let $i(H)$ be the maximum size of a set of vertices in $H$ that have pairwise incomparable neighborhoods. For the vertex-deletion variant LHomVD($H$), we show that the smallest possible constant is $i(H)+1$ for every $H$. The situation is more complex for the edge-deletion version. For every $H$, one can solve LHomED($H$) in time $i(H)^t\cdot n^{O(1)}$ if a tree decomposition of width $t$ is given. However, the existence of a specific type of decomposition of $H$ shows that there are graphs $H$ where LHomED($H$) can be solved significantly more efficiently and the best possible constant can be arbitrarily smaller than $i(H)$. Nevertheless, we determine this best possible constant and (assuming the SETH) prove tight bounds for every fixed $H$.
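
To make the parameter $i(H)$ concrete, the following brute-force sketch computes the largest set of vertices of a small graph $H$ whose neighborhoods are pairwise incomparable under inclusion. Open neighborhoods and the exhaustive search are assumptions of this illustration; the paper's precise definition and algorithms may differ.

```python
# Sketch: brute-force computation of i(H) for a small graph H.
from itertools import combinations

def i_H(adj):
    """adj: dict vertex -> set of neighbours of a small graph H."""
    verts = list(adj)

    def incomparable(u, v):
        # neither neighbourhood contains the other
        return not (adj[u] <= adj[v]) and not (adj[v] <= adj[u])

    best = 1 if verts else 0
    for r in range(2, len(verts) + 1):
        for S in combinations(verts, r):
            if all(incomparable(u, v) for u, v in combinations(S, 2)):
                best = max(best, r)
    return best

# Example: for the path a-b-c, N(a) and N(b) are incomparable,
# while N(a) = N(c) = {b} are comparable, so i(H) = 2.
print(i_H({"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}))   # -> 2
```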

This paper takes a different look at the problem of testing the mutual independence of the components of a high-dimensional vector. Instead of testing whether all pairwise associations (e.g., all pairwise Kendall's $\tau$) between the components vanish, we are interested in the (null) hypothesis that all pairwise associations do not exceed a certain threshold in absolute value. The consideration of these hypotheses is motivated by the observation that in the high-dimensional regime, it is rare, and perhaps impossible, to have a null hypothesis that can be exactly modeled by assuming that all pairwise associations are precisely equal to zero. The formulation of the null hypothesis as a composite hypothesis makes the problem of constructing tests non-standard, and in this paper we provide a solution for a broad class of dependence measures that can be estimated by $U$-statistics. In particular, we develop an asymptotic and a bootstrap level-$\alpha$ test for the new hypotheses in the high-dimensional regime. We also prove that the new tests are minimax-optimal and investigate their finite sample properties by means of a small simulation study and a data example.
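
The sketch below computes only the natural test statistic, the largest pairwise Kendall's $\tau$ in absolute value, and compares it with a threshold. The variable names and the naive plug-in decision rule are illustrative assumptions; the paper calibrates a proper level-$\alpha$ test via $U$-statistic asymptotics and a bootstrap, not by this raw comparison.

```python
# Sketch of the max-|Kendall tau| statistic for a threshold hypothesis.
import numpy as np
from scipy.stats import kendalltau

def max_abs_kendall(X):
    """X: (n, d) data matrix; returns max_{i<j} |tau_hat_{ij}|."""
    d = X.shape[1]
    taus = [abs(kendalltau(X[:, i], X[:, j])[0])
            for i in range(d) for j in range(i + 1, d)]
    return max(taus)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
Delta = 0.2                        # hypothesised bound on the associations
stat = max_abs_kendall(X)
print(stat, "exceeds threshold" if stat > Delta else "within threshold")
```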

This paper addresses the challenge of optimizing meta-parameters (i.e., hyperparameters) in machine learning algorithms, a critical factor influencing training efficiency and model performance. Moving away from computationally expensive traditional meta-parameter search methods, we introduce the MetaOptimize framework, which dynamically adjusts meta-parameters, particularly step sizes (also known as learning rates), during training. More specifically, MetaOptimize can wrap around any first-order optimization algorithm, tuning step sizes on the fly to minimize a specific form of regret that accounts for the long-term effect of step sizes on training through a discounted sum of future losses. We also introduce low-complexity variants of MetaOptimize that, in conjunction with its adaptability to multiple optimization algorithms, demonstrate performance competitive with the best hand-crafted learning-rate schedules across various machine learning applications.
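
As a much-simplified sketch of the idea of adapting the step size on the fly, the code below applies a hypergradient-style update (the step size moves in the direction suggested by the inner product of successive gradients) to gradient descent on a toy least-squares problem. This is not the MetaOptimize algorithm; the problem, the initial step size, and the meta step size are all assumptions for illustration.

```python
# Simplified hypergradient-style step-size adaptation on f(w) = 0.5*||Aw - b||^2.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10)) / np.sqrt(50)
b = rng.standard_normal(50)

def grad(w):
    return A.T @ (A @ w - b)

w = np.zeros(10)
alpha, beta = 1e-2, 1e-3          # step size and meta step size (assumed values)
g_prev = np.zeros_like(w)
for t in range(500):
    g = grad(w)
    alpha = max(alpha + beta * float(g @ g_prev), 1e-8)   # meta update of alpha
    w -= alpha * g                                        # base gradient step
    g_prev = g
print(0.5 * np.linalg.norm(A @ w - b) ** 2)
```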

We propose a hybrid iterative method based on MIONet for PDEs, which combines a traditional numerical iterative solver with the recently developed and powerful machine learning method of neural operators, and we systematically analyze its theoretical properties, including the convergence condition, the spectral behavior, and the convergence rate, in terms of the errors of the discretization and the model inference. We establish these theoretical results for the frequently used smoothers, i.e., Richardson (damped Jacobi) and Gauss-Seidel. We give an upper bound on the convergence rate of the hybrid method with respect to the model-correction period, which indicates the period for which the hybrid iteration converges fastest. Several numerical examples, including the hybrid Richardson (Gauss-Seidel) iteration for the 1-d (2-d) Poisson equation, are presented to verify our theoretical results and demonstrate a significant acceleration effect. As a meshless acceleration method, it holds considerable potential for practical applications.
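
The sketch below shows the structure of such a hybrid iteration for a 1-d Poisson problem: Richardson smoothing steps, with a "model correction" applied at a fixed period. The learned operator is replaced here by a stub (a damped direct solve of the residual equation) purely to keep the example self-contained; in the paper's method this role is played by a trained MIONet, and the grid size, damping, and correction period are assumptions.

```python
# Hybrid Richardson iteration for -u'' = f on a uniform 1-d grid.
import numpy as np

n, h = 63, 1.0 / 64
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
b = np.ones(n)                       # right-hand side f = 1

def model_correction(residual):
    """Stand-in for the neural-operator estimate of the error."""
    return 0.9 * np.linalg.solve(A, residual)

u = np.zeros(n)
omega = h**2 / 4                     # damped-Richardson step size
for k in range(1, 201):
    u += omega * (b - A @ u)         # Richardson (damped Jacobi) smoothing
    if k % 10 == 0:                  # correction period K = 10
        u += model_correction(b - A @ u)
print(np.linalg.norm(b - A @ u))     # residual norm after the hybrid iteration
```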

The characterization of complex networks with tools originating in geometry, for instance through the statistics of so-called Ricci curvatures, is a well-established tool of network science. Various types of such Ricci curvatures exist, capturing different aspects of network geometry. In the present work, we investigate Bakry-\'Emery-Ricci curvature, a notion of discrete Ricci curvature that has been studied extensively in geometry but so far has not been applied to networks. On standard classes of artificial networks as well as on selected empirical ones, we explore whether the statistics of this curvature are similar to or different from those of other curvatures, how it correlates with other important network measures, and what it tells us about the underlying network. We observe that most vertices typically have negative curvature. Random and small-world networks exhibit a narrow curvature distribution, whereas other classes and most of the real-world networks possess a wide curvature distribution. When we compare Bakry-\'Emery-Ricci curvature with two other discrete notions of Ricci curvature, Forman-Ricci and Ollivier-Ricci curvature, for both model and real-world networks, we observe a high positive correlation between Bakry-\'Emery-Ricci curvature and both Forman-Ricci and Ollivier-Ricci curvature, in particular with the augmented version of Forman-Ricci curvature. Bakry-\'Emery-Ricci curvature also exhibits a high negative correlation with vertex centrality measures and degree for most of the model and real-world networks; however, it does not correlate with the clustering coefficient. We also investigate the importance of vertices with highly negative curvature values for maintaining communication in the network. The computation time for Bakry-\'Emery-Ricci curvature is shorter than that required for Ollivier-Ricci curvature but longer than for augmented Forman-Ricci curvature.
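
For reference, the discrete Bakry-\'Emery curvature is commonly defined through the graph Laplacian and the carr\'e du champ operators (the normalization below is one standard convention and may differ from the paper's):

$$\Delta f(x)=\sum_{y\sim x}\big(f(y)-f(x)\big),\qquad
\Gamma(f,g)=\tfrac12\big(\Delta(fg)-f\,\Delta g-g\,\Delta f\big),\qquad
\Gamma_2(f)=\tfrac12\,\Delta\Gamma(f,f)-\Gamma(f,\Delta f),$$

and the Bakry-\'Emery curvature at a vertex $x$ is the largest constant $K$ such that $\Gamma_2(f)(x)\ge K\,\Gamma(f,f)(x)$ for all functions $f$, i.e., the best constant in the curvature-dimension condition $CD(K,\infty)$ at $x$.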

Limited amounts of data and data-sharing restrictions due to GDPR compliance are two common factors that reduce the availability and accessibility of medical data. To tackle these issues, we adopt the technique of Learning Using Privileged Information. To substantiate the idea, we build a robust model that improves the segmentation quality of tumors on digital mammograms by exploiting privileged information during the training procedure. To this end, a baseline model, called the student, is trained on patches extracted from the original mammograms, while an auxiliary model with the same architecture, called the teacher, is trained on the corresponding enhanced patches and thus has access to privileged information. We then repeat the student training procedure, this time with the assistance of the teacher model. According to the experimental results, the proposed methodology performs better in most cases and achieves a 10% higher F1 score in comparison with the baseline.
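
A minimal sketch of the teacher-assisted student update is given below: the teacher sees the privileged (enhanced) patch and the student is trained to match both the ground truth and the teacher's output. The architectures, the enhancement, and the loss weight `lambda_distill` are illustrative assumptions, not the paper's exact setup.

```python
# One LUPI-style training step for the student, assisted by the teacher.
import torch
import torch.nn as nn

def student_step(student, teacher, optimizer, patch, enhanced_patch, target,
                 lambda_distill=0.5):
    """Segmentation loss plus agreement with a teacher that sees the
    privileged (enhanced) version of the same patch."""
    student.train()
    with torch.no_grad():
        teacher_logits = teacher(enhanced_patch)    # privileged information
    student_logits = student(patch)
    seg_loss = nn.functional.binary_cross_entropy_with_logits(
        student_logits, target)
    distill_loss = nn.functional.mse_loss(
        torch.sigmoid(student_logits), torch.sigmoid(teacher_logits))
    loss = seg_loss + lambda_distill * distill_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```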

This paper introduces a new local plastic correction algorithm developed to accelerate finite element simulations of structures with elasto-plastic constitutive laws. The proposed method belongs to the category of generalized multiaxial Neuber-type methods enabled by pointwise proportional evolution rules. The algorithm numerically integrates J2 plasticity laws as a function of the finite element elastic response of the structure, to obtain full-field 3D elasto-plastic quantities for any proportionally applied loading. Examples of the numerical capabilities of this algorithm are shown on a structure containing a distribution of pores, for monotonic and fatigue loading. The approximation errors due to the proposed local plastic correction are also investigated. As a second point of innovation, we show that the proposed local plastic correction can be accelerated when dealing with large-scale structures by employing a simple meta-model, with virtually no added error. Finally, we develop and investigate the merits of an additional deep-learning-based corrective layer that reduces approximation errors: full elasto-plastic FE simulations are performed on a subset of structures, and their solutions are subsequently used as a training set for a convolutional neural network designed to learn the error between the full FE solution and the plastic-correction approximation.
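
For context, the sketch below shows a textbook pointwise J2 correction: the standard radial-return mapping with linear isotropic hardening applied to an elastic trial stress at a single material point. This is the classical algorithm rather than the paper's specific Neuber-type proportional evolution rule, and the material constants are assumed values.

```python
# Standard radial-return mapping for J2 plasticity with linear isotropic hardening.
import numpy as np

def radial_return(sig_trial, alpha_n, mu=80e3, sigma_y=250.0, H=1e3):
    """sig_trial: 3x3 elastic trial stress [MPa]; alpha_n: accumulated
    plastic strain. Returns the corrected stress and the updated alpha."""
    s = sig_trial - np.trace(sig_trial) / 3.0 * np.eye(3)    # deviatoric part
    norm_s = np.linalg.norm(s)
    f = norm_s - np.sqrt(2.0 / 3.0) * (sigma_y + H * alpha_n)
    if f <= 0.0:                                              # elastic point
        return sig_trial, alpha_n
    dgamma = f / (2.0 * mu + 2.0 * H / 3.0)                   # plastic multiplier
    n = s / norm_s
    sig = sig_trial - 2.0 * mu * dgamma * n                   # return to the yield surface
    return sig, alpha_n + np.sqrt(2.0 / 3.0) * dgamma

sig, alpha = radial_return(np.diag([400.0, 0.0, 0.0]), 0.0)
print(np.round(sig, 1), alpha)
```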
