
Curb detection is essential for environmental awareness in Automated Driving (AD), as curbs typically delimit drivable and non-drivable areas. Annotated data are necessary for developing and validating an AD function. However, public datasets with annotated point cloud curbs are scarce. This paper presents a method for detecting 3D curbs in a sequence of point clouds captured from a LiDAR sensor, which consists of two main steps. First, our approach detects the curbs in each scan using a segmentation deep neural network. Then, a sequence-level processing step estimates the 3D curbs in the reconstructed point cloud using the odometry of the vehicle. From these 3D curb points, we obtain polylines structured following the ASAM OpenLABEL standard. These detections can be used as pre-annotations in labelling pipelines to efficiently generate curb-related ground truth data. We validate our approach through an experiment in which different human annotators were required to annotate curbs in a group of LiDAR-based sequences with and without our automatically generated pre-annotations. The results show that our detections reduce the manual annotation time by 50.99% while maintaining the same data quality level.
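As a rough illustration of the sequence-level step, the sketch below accumulates per-scan curb points into a common world frame using 4x4 odometry poses and thins them into a polyline. The segmentation network is assumed to be available elsewhere, and all function names, the binning scheme, and the `step` parameter are illustrative rather than the paper's implementation.

```python
# Minimal sketch of the sequence-level step: per-scan curb points are moved into
# a common world frame using the vehicle odometry, then thinned into a polyline.
import numpy as np

def to_world(points_xyz, pose_4x4):
    """Transform Nx3 LiDAR points with a 4x4 odometry pose (scan -> world)."""
    homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    return (homo @ pose_4x4.T)[:, :3]

def accumulate_curb_points(scans, curb_masks, poses):
    """scans: list of Nx3 arrays; curb_masks: per-scan boolean masks from the
    segmentation network; poses: per-scan 4x4 odometry matrices."""
    world_points = [to_world(pts[mask], pose)
                    for pts, mask, pose in zip(scans, curb_masks, poses)]
    return np.vstack(world_points)

def polyline_from_points(curb_points, step=0.5):
    """Very rough polyline extraction: sort along the dominant horizontal
    direction and keep one representative point per `step` metres (a placeholder
    for the structured ASAM OpenLABEL export)."""
    axis = int(np.argmax(curb_points[:, :2].std(axis=0)))
    pts = curb_points[np.argsort(curb_points[:, axis])]
    bins = np.floor(pts[:, axis] / step)
    _, first = np.unique(bins, return_index=True)
    return pts[first]
```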

Related Content

Automator is an application developed by Apple for its Mac OS X operating system. With simple point-and-click and drag-and-drop operations, a series of actions can be combined into a workflow, helping you automate repetitive, complex tasks. Automator also works across many different kinds of applications, including the Finder, the Safari web browser, iCal, Address Book, and others. It can also work with third-party applications such as Microsoft Office, Adobe Photoshop, or Pixelmator.

In instrumental variable (IV) settings, such as imperfect randomized trials and observational studies with Mendelian randomization, one may encounter a continuous exposure whose causal effect is not of true interest. Instead, scientific interest may lie in a coarsened version of this exposure. Although there is a lengthy literature on the impact of coarsening an exposure, with several works focusing specifically on IV settings, all methods proposed in this literature require parametric assumptions. Instead, just as in the standard IV setting, one can consider partial identification via bounds that make no parametric assumptions. This was first pointed out in Alexander Balke's PhD dissertation. We extend and clarify his work and derive novel bounds in several settings, including for a three-level IV, which will most likely be the case in Mendelian randomization. We demonstrate our findings in two real-data examples: a randomized trial for peanut allergy in infants and a Mendelian randomization setting investigating the effect of homocysteine on cardiovascular disease.
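For background, Balke's bounds arise from a linear program over the distribution of latent response types. The sketch below computes such bounds numerically for the simplest case of a binary instrument, binary exposure, and binary outcome; it is a generic illustration of the linear-programming idea, not the paper's extended bounds, and it assumes the supplied conditional probabilities are compatible with the IV model.

```python
# Numerical sketch of the linear-programming approach to IV bounds for binary
# instrument Z, exposure X, and outcome Y.  Latent "response types" encode X as
# a function of Z and Y as a function of X (exclusion restriction); the average
# treatment effect (ATE) is bounded by minimising/maximising it over all type
# distributions consistent with the observed conditionals.
import itertools
import numpy as np
from scipy.optimize import linprog

def iv_ate_bounds(p_xy_given_z):
    """p_xy_given_z[z][(x, y)] = P(X=x, Y=y | Z=z), observed from data."""
    x_types = list(itertools.product([0, 1], repeat=2))  # (X(Z=0), X(Z=1))
    y_types = list(itertools.product([0, 1], repeat=2))  # (Y(X=0), Y(X=1))
    types = list(itertools.product(x_types, y_types))    # 16 latent classes

    # Objective: ATE = sum over types of q * (Y(1) - Y(0)).
    c = [yt[1] - yt[0] for _, yt in types]

    # Each observed cell must equal the corresponding mixture of latent types.
    A_eq, b_eq = [], []
    for z in (0, 1):
        for x in (0, 1):
            for y in (0, 1):
                A_eq.append([1.0 if (xt[z] == x and yt[x] == y) else 0.0
                             for xt, yt in types])
                b_eq.append(p_xy_given_z[z][(x, y)])
    A_eq.append([1.0] * len(types))   # probabilities sum to one
    b_eq.append(1.0)

    bnds = [(0, 1)] * len(types)
    lower = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bnds).fun
    upper = -linprog([-ci for ci in c], A_eq=A_eq, b_eq=b_eq, bounds=bnds).fun
    return lower, upper
```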

Large Vision-Language Models (VLMs) have demonstrated impressive performance on complex tasks involving visual input with natural language instructions. However, it remains unclear to what extent capabilities on natural images transfer to Earth observation (EO) data, which consist predominantly of satellite and aerial images that are less common in VLM training data. In this work, we propose a comprehensive benchmark to gauge the progress of VLMs toward being useful tools for EO data by assessing their abilities on scene understanding, localization and counting, and change detection tasks. Motivated by real-world applications, our benchmark includes scenarios such as urban monitoring, disaster relief, land use, and conservation. We discover that, although state-of-the-art VLMs like GPT-4V possess extensive world knowledge that leads to strong performance on open-ended tasks like location understanding and image captioning, their poor spatial reasoning limits their usefulness on object localization and counting tasks. Our benchmark will be made publicly available at //vleo.danielz.ch/ and on Hugging Face at //huggingface.co/collections/mit-ei/vleo-benchmark-datasets-65b789b0466555489cce0d70 for easy model evaluation.

Stepped wedge cluster randomized experiments represent a class of unidirectional crossover designs increasingly adopted for comparative effectiveness and implementation science research. Although these experiments have become popular, definitions of estimands and robust methods to target clearly defined estimands remain insufficient. To address this gap, we describe a class of estimands that explicitly acknowledge the multilevel data structure in stepped wedge cluster randomized experiments, and we highlight three typical members of the estimand class that are interpretable and of practical interest. We then introduce four possible formulations of analysis of covariance (ANCOVA) working models to achieve estimand-aligned analyses. By exploiting baseline covariates, each ANCOVA model can potentially improve estimation efficiency over the unadjusted estimators. In addition, each ANCOVA estimator is model-assisted in the sense that its point estimator is consistent for the target estimand even when the working model is misspecified. Under the stepped wedge randomization scheme, we establish the finite-population central limit theorem for each estimator, which motivates design-based variance estimators. Through simulations, we study the finite-sample operating characteristics of the ANCOVA estimators under different data-generating processes. We illustrate their application via the analysis of the Washington State Expedited Partner Therapy study.
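To make the general idea of a covariate-adjusted, cluster-robust analysis concrete, here is a minimal sketch using a single working model; the paper's four ANCOVA formulations and its design-based variance estimators are not reproduced here, and the column names (`y`, `treat`, `period`, `cluster`, `x1`) are hypothetical.

```python
# Generic ANCOVA-style analysis with cluster-robust standard errors: outcome
# regressed on the treatment indicator, period fixed effects, and a baseline
# covariate, with inference clustered on the randomization unit.
import pandas as pd
import statsmodels.formula.api as smf

def ancova_sketch(df: pd.DataFrame):
    model = smf.ols("y ~ treat + C(period) + x1", data=df)
    fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["cluster"]})
    return fit.params["treat"], fit.bse["treat"]  # estimate and its SE
```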

Lattices are architected metamaterials whose properties strongly depend on their geometrical design. The analogy between lattices and graphs enables the use of graph neural networks (GNNs) as a faster surrogate model compared to traditional methods such as finite element modelling. In this work, we present a higher-order GNN model trained to predict the fourth-order stiffness tensor of periodic strut-based lattices. The key features of the model are (i) SE(3) equivariance and (ii) consistency with the thermodynamic law of conservation of energy. We compare the model to non-equivariant models on a number of error metrics and demonstrate the benefits of the encoded equivariance and energy conservation in terms of predictive performance and reduced training requirements.
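For reference, these two constraints can be stated in standard elasticity notation (generic relations, not taken from the paper): rotation equivariance requires the predicted tensor to co-rotate with the lattice, and deriving the tensor from a strain energy density yields major symmetry and a non-negative stored energy.

```latex
% Rotation part of SE(3) equivariance (summation convention, R a rotation matrix):
\[
  C'_{ijkl} \;=\; R_{ia}\,R_{jb}\,R_{kc}\,R_{ld}\,C_{abcd}
  \qquad \text{for all } R \in SO(3).
\]
% Energy-based consistency: a stiffness tensor derived from a strain energy
% density has major symmetry and yields non-negative stored energy.
\[
  W(\boldsymbol{\varepsilon}) \;=\; \tfrac{1}{2}\,\varepsilon_{ij}\,C_{ijkl}\,\varepsilon_{kl}
  \;\ge\; 0,
  \qquad C_{ijkl} = C_{klij}.
\]
```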

A popular method for variance reduction in observational causal inference is propensity-based trimming, the practice of removing units with extreme propensities from the sample. This practice has theoretical grounding when the data are homoscedastic and the propensity model is parametric (Yang and Ding, 2018; Crump et al., 2009), but in modern settings where heteroscedastic data are analyzed with non-parametric models, existing theory fails to support current practice. In this work, we address this challenge by developing new methods and theory for sample trimming. Our contributions are threefold. First, we describe novel procedures for selecting which units to trim. Our procedures differ from previous work in that we trim not only units with small propensities but also units with extreme conditional variances. Second, we give new theoretical guarantees for inference after trimming. In particular, we show how to perform inference on the trimmed subpopulation without requiring that our regressions converge at parametric rates. Instead, we make only fourth-root rate assumptions like those in the double machine learning literature. This result applies to conventional propensity-based trimming as well and thus may be of independent interest. Finally, we propose a bootstrap-based method for constructing simultaneously valid confidence intervals for multiple trimmed subpopulations, which are valuable for navigating the trade-off between sample size and variance reduction inherent in trimming. We validate our methods in simulation, on the 2007-2008 National Health and Nutrition Examination Survey, and on a semi-synthetic Medicare dataset, and we find promising results in all settings.
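A minimal sketch of a trimming rule in this spirit is shown below. It drops units with extreme estimated propensities and, additionally, units whose estimated conditional variance lies in the upper tail; the thresholds `eps` and `var_quantile` are hypothetical tuning choices, not the paper's selection procedure.

```python
# Illustrative trimming rule: keep units with non-extreme estimated propensities
# and with estimated conditional outcome variance below an upper-tail cutoff.
import numpy as np

def trim_sample(e_hat, var_hat, eps=0.1, var_quantile=0.95):
    """e_hat: estimated propensity scores; var_hat: estimated conditional
    outcome variances.  Returns a boolean mask of retained units."""
    keep_propensity = (e_hat >= eps) & (e_hat <= 1.0 - eps)
    keep_variance = var_hat <= np.quantile(var_hat, var_quantile)
    return keep_propensity & keep_variance
```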

A new approach to local and global explanation is proposed. It is based on a convex hull constructed from a finite number of points around an explained instance. The convex hull allows us to consider a dual representation of instances in the form of convex combinations of extreme points of the resulting polytope. Instead of perturbing new instances in the Euclidean feature space, vectors of convex combination coefficients are generated uniformly from the unit simplex, and they form a new dual dataset. A dual linear surrogate model is trained on this dual dataset. The explanation feature importance values are computed by means of simple matrix calculations. The approach can be regarded as a modification of the well-known LIME model. The dual representation inherently allows us to obtain an example-based explanation. The neural additive model is also considered as a tool for implementing the example-based explanation approach. Many numerical experiments with real datasets are performed to study the approach. The code of the proposed algorithms is available.
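A minimal sketch of this dual-representation idea follows, assuming a black-box `predict_fn` returning one score per row and a set of neighbouring points around the explained instance; the pseudoinverse mapping from dual weights back to feature importances is a heuristic stand-in for the matrix computation described above, and all names and the ridge penalty are illustrative.

```python
# Sketch: find the extreme points of the convex hull of the neighbourhood,
# sample convex-combination weights uniformly from the unit simplex
# (Dirichlet(1,...,1)), query the black-box model at the corresponding points,
# and fit a linear surrogate in the dual (weight) space.
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.linear_model import Ridge

def dual_explanation(predict_fn, neighbours, n_samples=1000, seed=0):
    """neighbours: (m, d) array of points collected around the instance."""
    rng = np.random.default_rng(seed)
    extremes = neighbours[ConvexHull(neighbours).vertices]       # polytope vertices
    lam = rng.dirichlet(np.ones(len(extremes)), size=n_samples)  # dual samples
    x = lam @ extremes                          # map dual weights to feature space
    y = predict_fn(x)                           # black-box scores, shape (n_samples,)
    w_dual = Ridge(alpha=1e-3).fit(lam, y).coef_    # dual linear surrogate
    return np.linalg.pinv(extremes) @ w_dual    # heuristic feature importances
```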

In this paper, we develop a class of high-order conservative methods for simulating non-equilibrium radiation diffusion problems. Numerically, this system poses significant challenges due to the strong nonlinearity of the stiff source terms and the degeneracy of the nonlinear diffusion terms. Explicit methods require impractically small time steps, while implicit methods, which offer stability, come with the challenge of guaranteeing the convergence of nonlinear iterative solvers. To overcome these challenges, we propose a predictor-corrector approach and design suitable implicit-explicit time discretizations. In the predictor step, the system is reformulated into a nonconservative form, and linear diffusion terms are introduced as a penalization to mitigate the strong nonlinearities. We then employ a Picard iteration to secure convergence in handling the nonlinear aspects. The corrector step guarantees the conservation of total energy, which is vital for accurately simulating the speeds of propagating sharp fronts in this system. For spatial approximations, we utilize local discontinuous Galerkin finite element methods, coupled with positivity-preserving and TVB limiters. We validate the orders of accuracy, conservation properties, and suitability of using large time steps for our proposed methods through numerical experiments conducted on one- and two-dimensional spatial problems. In both homogeneous and heterogeneous non-equilibrium radiation diffusion problems, we attain a time stability condition comparable to that of a fully implicit time discretization. Such an approach is also applicable to many other reaction-diffusion systems.
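A commonly used gray two-temperature model of this type, shown here only to make the stiff coupling and the conserved quantity concrete (the paper's exact system, coefficients, and scaling may differ), is

```latex
\[
  \frac{\partial E}{\partial t}
    = \nabla \cdot \big( D_E(E,T)\, \nabla E \big) + \sigma_a \left( T^4 - E \right),
  \qquad
  \frac{\partial T}{\partial t}
    = \nabla \cdot \big( D_T(T)\, \nabla T \big) - \sigma_a \left( T^4 - E \right),
\]
```

where E is the radiation energy density, T the material temperature (with unit heat capacity), sigma_a a temperature-dependent absorption coefficient, and D_E, D_T nonlinear, possibly degenerate diffusion coefficients. Adding the two equations shows that the total energy E + T changes only through the diffusive fluxes, which is the conservation property the corrector step is designed to preserve.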

Logistic regression is a ubiquitous method for probabilistic classification. However, its effectiveness depends upon careful and relatively computationally expensive tuning, especially of the regularisation hyperparameter and particularly in the context of high-dimensional data. We present a prevalidated ridge regression model that closely matches logistic regression in terms of classification error and log-loss, particularly for high-dimensional data, while being significantly more computationally efficient and having effectively no hyperparameters beyond regularisation. We scale the coefficients of the model so as to minimise log-loss for a set of prevalidated predictions derived from the estimated leave-one-out cross-validation error. This exploits quantities already computed in the course of fitting the ridge regression model in order to find the scaling parameter with nominal additional computational expense.
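The sketch below illustrates the underlying mechanics for binary labels, assuming the standard exact leave-one-out shortcut for ridge regression and a single scalar scaling parameter fitted by minimising log-loss; the variable names, the target coding, and the optimiser bounds are illustrative, not the paper's implementation.

```python
# Sketch of prevalidated ridge for binary labels: fit ridge on +/-1 targets,
# obtain exact leave-one-out (LOO) predictions via the standard ridge shortcut,
# then fit one scalar that rescales the decision values to minimise log-loss
# on those prevalidated predictions.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import expit

def prevalidated_ridge_sketch(X, y, lam=1.0):
    """X: (n, p) features; y: (n,) labels in {0, 1}; lam: ridge penalty."""
    p = X.shape[1]
    t = 2.0 * y - 1.0                          # regress on +/-1 targets
    G = X.T @ X + lam * np.eye(p)
    beta = np.linalg.solve(G, X.T @ t)
    h = np.einsum("ij,ji->i", X, np.linalg.solve(G, X.T))  # hat-matrix diagonal
    fitted = X @ beta
    loo = (fitted - h * t) / (1.0 - h)         # exact ridge LOO predictions

    def loss(c):                               # log-loss of sigmoid(c * loo)
        p_hat = np.clip(expit(c * loo), 1e-12, 1 - 1e-12)
        return -np.mean(y * np.log(p_hat) + (1 - y) * np.log(1 - p_hat))

    c_star = minimize_scalar(loss, bounds=(1e-3, 1e3), method="bounded").x
    return c_star * beta                       # scaled coefficient vector
```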

Let R_eps denote randomized query complexity for error probability eps, and let R := R_{1/3}. In this work we investigate whether a perfect composition theorem R(f o g^n) = Omega(R(f) · R(g)) holds for a relation f ⊆ {0,1}^n × S and a total inner function g: {0,1}^m -> {0,1}. Let D^{prod} denote the maximum distributional query complexity with respect to any product (over variables) distribution. We show the composition theorem R(f o g^n) = Omega(R(f) · D^{prod}(g)) up to logarithmic factors. In light of the minimax theorem, which states that R(g) is the maximum distributional complexity of g over any distribution, our result makes progress towards answering the composition question. We prove our result by means of a complexity measure R^{prod}_{eps} that we define for total Boolean functions. We show it to be equivalent, up to logarithmic factors, to the sabotage complexity measure RS() defined by Ben-David and Kothari (ICALP 2019): RS(g) = Theta(R^{prod}_{1/3}(g)). We ask whether our bound RS(g) = Omega(D^{prod}(g)) (up to log factors) is tight. We answer this question in the negative by showing that, for the NAND tree function, sabotage complexity is polynomially larger than D^{prod}. Our proof yields an alternative derivation of the tight lower bound on the bounded-error randomized query complexity of the NAND tree function (originally proved by Santha in 1985), which may be of independent interest. Our result gives an explicit polynomial separation between R and D^{prod} which, to our knowledge, was not known prior to our work.

Disaggregated evaluation is a central task in AI fairness assessment, with the goal of measuring an AI system's performance across different subgroups defined by combinations of demographic or other sensitive attributes. The standard approach is to stratify the evaluation data across subgroups and compute performance metrics separately for each group. However, even for moderately sized evaluation datasets, sample sizes quickly become small once intersectional subgroups are considered, which greatly limits the extent to which intersectional groups are considered in many disaggregated evaluations. In this work, we introduce a structured regression approach to disaggregated evaluation that we demonstrate can yield reliable system performance estimates even for very small subgroups. We also provide corresponding inference strategies for constructing confidence intervals and explore how goodness-of-fit testing can yield insight into the structure of fairness-related harms experienced by intersectional groups. We evaluate our approach on two publicly available datasets and several variants of semi-synthetic data. The results show that our method is considerably more accurate than the standard approach, especially for small subgroups, and that goodness-of-fit testing helps identify the key factors that drive differences in performance.
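As a rough illustration of the contrast with the standard stratified approach, the sketch below compares per-cell empirical accuracies with predictions from a main-effects logistic regression that pools information across subgroups; the paper's model family, inference strategies, and goodness-of-fit tests are richer than this, and the column names (`correct`, `sex`, `age_group`) are hypothetical.

```python
# (a) Standard disaggregation: empirical accuracy per intersectional cell.
# (b) Structured estimate: a main-effects logistic regression pools information
#     across cells, stabilising estimates for small subgroups.
import pandas as pd
import statsmodels.formula.api as smf

def subgroup_accuracy(df: pd.DataFrame):
    """df["correct"] is a 0/1 indicator of whether the system's output was correct."""
    naive = df.groupby(["sex", "age_group"])["correct"].mean()

    fit = smf.logit("correct ~ C(sex) + C(age_group)", data=df).fit(disp=0)
    cells = df[["sex", "age_group"]].drop_duplicates().copy()
    cells["structured_estimate"] = fit.predict(cells)
    return naive, cells
```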
