
We consider the problem of estimating the false-positive rate and true-positive rate (FPR/TPR) of a binary classification model when the validation set contains incorrect labels (label noise). Our motivating application is fraud prevention, where accurate estimates of the FPR are critical to preserving the experience of good customers and where the label noise is highly asymmetric. Existing methods seek to minimize the total error in the cleaning process: to avoid cleaning examples that are not noisy, and to ensure that examples that are noisy get cleaned. This is an important measure of accuracy but is insufficient to guarantee good estimates of the true FPR or TPR of a model, and we show that using the model to directly clean its own validation data leads to underestimates even when the total error is low. This indicates a need for methods that not only reduce total error but also de-correlate the cleaning error from the model scores.
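A minimal simulation of this effect, with entirely hypothetical noise rates, score distribution, and thresholds (not the paper's data or cleaning method), illustrating why a cleaning rule driven by the model's own scores pushes the measured FPR below the true FPR:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
y_true = rng.random(n) < 0.05                                   # 5% true positives (e.g. fraud)
score = np.clip(0.6 * y_true + rng.normal(0.3, 0.15, n), 0, 1)  # hypothetical model score

# asymmetric label noise: 20% of true positives are recorded as negatives
flipped = y_true & (rng.random(n) < 0.20)
y_noisy = y_true & ~flipped

pred = score > 0.7                                  # model decisions at a fixed threshold

def fpr(labels, preds):
    return (preds & ~labels).sum() / (~labels).sum()

# "self-cleaning": relabel observed negatives that the model itself scores highly
y_cleaned = y_noisy | (score > 0.6)

print(f"FPR vs. true labels:  {fpr(y_true, pred):.4f}")     # ground truth
print(f"FPR vs. noisy labels: {fpr(y_noisy, pred):.4f}")    # inflated by flipped positives
print(f"FPR vs. self-cleaned: {fpr(y_cleaned, pred):.4f}")  # underestimate: the cleaning rule
# also relabels the model's own false positives, removing them from the FPR numerator
```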

Related Content

The NSGA-II is one of the most prominent algorithms for solving multi-objective optimization problems. Despite numerous successful applications, several studies have shown that the NSGA-II is less effective for larger numbers of objectives. In this work, we use mathematical runtime analyses to rigorously demonstrate and quantify this phenomenon. We show that even on the simple $m$-objective generalization of the discrete OneMinMax benchmark, where every solution is Pareto optimal, the NSGA-II, even with large population sizes, cannot compute the full Pareto front (the objective vectors of all Pareto optima) in sub-exponential time when the number of objectives is at least three. The reason for this unexpected behavior lies in the fact that the crowding distance treats the different objectives independently. This is not a problem for two objectives, where any sorting of a pairwise incomparable set of solutions according to one objective is also a sorting according to the other objective (in inverse order).
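For reference, a short sketch of the standard NSGA-II crowding distance (input shapes and names are illustrative, not the paper's analysis code); each objective is sorted on its own, which is exactly the independence the runtime analysis exploits:

```python
import numpy as np

def crowding_distance(front):
    """Crowding distance as used by NSGA-II.

    front: array of shape (n, m) -- n objective vectors with m objectives.
    Each objective is sorted independently; for m >= 3 these sortings are
    no longer consistent with one another, unlike in the two-objective case.
    """
    n, m = front.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(front[:, j])
        dist[order[0]] = dist[order[-1]] = np.inf   # boundary points are always kept
        span = front[order[-1], j] - front[order[0], j]
        if span == 0:
            continue
        # interior points accumulate the normalized gap between their neighbors
        dist[order[1:-1]] += (front[order[2:], j] - front[order[:-2], j]) / span
    return dist
```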

We propose the use of a lower or upper triangular sub-base matrix to replace the identity matrix in the source-check-channel-variable linking protomatrix of a double-protograph low-density parity-check joint source-channel code (DP-LDPC JSCC). The elements along the diagonal of the proposed lower or upper triangular sub-base matrix are set to "1", and the other non-zero elements can take any non-negative integer values. Compared with traditional DP-LDPC JSCC designs, the new designs show a theoretical channel threshold improvement of up to 0.41 dB and a simulated source symbol error rate improvement of up to 0.5 dB at an error rate of $10^{-6}$.
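As an illustration (a hypothetical $3 \times 3$ case, not a matrix from the paper), the identity sub-matrix in the linking position would be replaced by a lower triangular sub-base matrix of the form

$$
B_{\mathrm{LT}} =
\begin{pmatrix}
1      & 0      & 0 \\
b_{21} & 1      & 0 \\
b_{31} & b_{32} & 1
\end{pmatrix},
\qquad b_{ij} \in \{0, 1, 2, \dots\},
$$

where the entries $b_{ij}$ below the diagonal are the additional degrees of freedom in the protograph design.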

The substitution lemma is a renowned theorem of lambda-calculus theory and concerns how metasubstitution operations interact. In this work, we augment the grammar of the lambda-calculus with an uninterpreted explicit substitution operator, which allows our framework to be used for different calculi with explicit substitutions. Our primary contribution lies in verifying that, despite these modifications, the substitution lemma remains valid. This confirmation was achieved using the Coq proof assistant. Our formalization methodology employs a nominal approach, which provides a direct implementation of the notion of alpha-equivalence. The variable-renaming strategy used in the proofs presents a challenge, especially in ensuring that the implications of our extension to the grammar of the lambda-calculus are fully explored.
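For context, the classical statement of the substitution lemma for metasubstitution (in generic notation that may differ from the paper's): if $x \neq y$ and $x \notin FV(L)$, then

$$
M\{x := N\}\{y := L\} \;=\; M\{y := L\}\{x := N\{y := L\}\}.
$$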

Neural networks (NNs) provide a solid and reliable way of executing different types of applications, ranging from speech recognition to medical diagnosis, speeding up onerous and long workloads. The challenges involved in their implementation at the edge include providing diversity, flexibility, and sustainability. That implies, for instance, supporting evolving applications and algorithms energy-efficiently. Using hardware or software accelerators can deliver fast and efficient computation of NNs, while flexibility can be exploited to support long-term adaptivity. Nonetheless, handcrafting an NN for a specific device, although it can lead to an optimal solution, takes time and experience, which is why frameworks for hardware accelerators are being developed. This work-in-progress study explores the possibility of combining the toolchain proposed by Ratto et al., which has the distinctive ability to favor adaptivity, with approximate computing. The goal is to enable lightweight, adaptable NN inference on FPGAs at the edge. Before that, the work presents a detailed review of established frameworks that adopt a similar streaming architecture for future comparison.

In the context of general rough sets, the act of combining two things to form another is not straightforward. The situation is similar in other theories that deal with uncertainty and vagueness. Such acts can be endowed with additional meanings that go beyond structural conjunction and disjunction, as in the theory of $*$-norms and associated implications over $L$-fuzzy sets. In the present research, algebraic models of acts of combining things in generalized rough sets over lattices with approximation operators (called rough convenience lattices) are introduced. The investigation is strongly motivated by the desire to model skeptical or pessimistic, and optimistic or possibilistic, aggregation in human reasoning, and the choice of operations is constrained by this perspective. Fundamental results on the weak negations and implications afforded by the minimal models are proved. In addition, the model is suitable for the study of discriminatory/toxic behavior in human reasoning, and of ML algorithms that learn such behavior.

Orthogonal time frequency space (OTFS) is a modulation technique that is robust against the disruptive effects of doubly-selective channels. In this paper, we perform an experimental study of OTFS using a real-time software-defined radio (SDR) setup. Our SDR consists of a Graphics Processing Unit (GPU) for signal processing, programmed using Sionna and TensorFlow, and Universal Software Radio Peripheral (USRP) devices for the air interface. We implement a low-latency transceiver structure for OTFS and investigate its performance under various Doppler values. By comparing the performance of OTFS with Orthogonal Frequency Division Multiplexing (OFDM), we demonstrate that OTFS is highly robust against the disruptive effects of doubly-selective channels in a real-time experimental setup.
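A minimal NumPy sketch of OTFS modulation with rectangular pulses (grid sizes, normalization, and function names are assumptions for illustration, not the Sionna/TensorFlow implementation used in the paper): the ISFFT followed by the Heisenberg transform reduces to an IFFT along the Doppler axis.

```python
import numpy as np

def otfs_modulate(x_dd):
    """Map an M x N delay-Doppler grid to time-domain samples (rectangular pulses).

    x_dd: complex array of shape (M, N) -- M delay bins, N Doppler bins.
    With rectangular pulses, the ISFFT + Heisenberg transform reduce to an
    N-point IFFT along the Doppler axis, read out one time slot per column.
    """
    M, N = x_dd.shape
    x_dt = np.fft.ifft(x_dd, axis=1) * np.sqrt(N)   # delay-time domain
    return x_dt.T.reshape(-1)                       # serialize: slot-major, delay within slot

def otfs_demodulate(s, M, N):
    """Inverse mapping for an ideal, noiseless channel."""
    x_dt = s.reshape(N, M).T
    return np.fft.fft(x_dt, axis=1) / np.sqrt(N)

# round-trip sanity check on random QPSK symbols
M, N = 32, 16
angles = (np.random.default_rng(0).integers(0, 4, (M, N)) * 2 + 1) * np.pi / 4
x_dd = np.exp(1j * angles)
assert np.allclose(otfs_demodulate(otfs_modulate(x_dd), M, N), x_dd)
```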

Statisticians were among the earliest professional adopters of data visualization and remain some of its most prolific users. Understanding how these professionals use visual representations in their analytic process may shed light on best practices for visual sensemaking. We present results from an interview study involving 18 professional statisticians (averaging 19.7 years in the profession) on three aspects: (1) their use of visualization in their daily analytic work; (2) their mental models of inferential statistical processes; and (3) their design recommendations for how best to represent statistical inferences. Interview sessions consisted of discussing inferential statistics, eliciting participant sketches of suitable visual designs, and finally, a design intervention with our proposed visual designs. We analyzed the interview transcripts using thematic analysis and open coding, deriving thematic codes on statistical mindset, analytic process, and analytic toolkit. The key findings for each aspect are as follows: (1) statisticians make extensive use of visualization during all phases of their work (not just when reporting results); (2) their mental models of inferential methods tend to be largely visual; and (3) many statisticians abhor dichotomous thinking. The latter suggests that a multi-faceted visual display of inferential statistics that includes a visual indicator of analytically important effect sizes may help balance the attributed epistemic power of traditional statistical testing with an awareness of the uncertainty of sensemaking.

Prognostics and Health Management (PHM) is a discipline focused on predicting the point at which systems or components will cease to perform as intended, typically expressed as the Remaining Useful Life (RUL). RUL serves as a vital decision-making tool for contingency planning, guiding the timing and nature of system maintenance. Historically, PHM has primarily been applied to hardware systems, with its application to software explored only recently. In a recent study, we introduced a methodology and demonstrated how changes in software can impact its RUL. In practical software development, however, real-time performance is also influenced by various environmental attributes, including the operating system, clock speed, processor performance, RAM, machine core count, and others. This research extends the analysis to assess how changes in environmental attributes, such as the operating system and clock speed, affect RUL estimation for software. The findings are rigorously validated using real performance data from controlled test beds and compared with data generated by predictive models. Statistical validation, including regression analysis, supports the credibility of the results. The controlled test-bed environment replicates and validates faults from real applications, ensuring a standardized assessment platform. This exploration yields actionable knowledge for software maintenance and optimization strategies, addressing a significant gap in the field of software health management.
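A minimal sketch of the kind of regression-based RUL estimate this line of work relies on, with entirely hypothetical monitoring data, degradation metric, and failure threshold (not the paper's model or data):

```python
import numpy as np

# hypothetical monitoring data: mean response time (ms), sampled once per day
days = np.arange(30)
latency = 120 + 1.8 * days + np.random.default_rng(1).normal(0, 5, days.size)
failure_threshold = 250.0   # assumed latency (ms) at which the software no longer performs as intended

# simple degradation model: linear trend of latency over time
slope, intercept = np.polyfit(days, latency, deg=1)

# extrapolate to the day the fitted trend crosses the failure threshold
crossing_day = (failure_threshold - intercept) / slope
rul_days = crossing_day - days[-1]
print(f"estimated RUL: {rul_days:.1f} days")
```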

The estimation of cumulative distribution functions (CDFs) and probability density functions (PDFs) is a fundamental practice in applied statistics. However, challenges often arise when dealing with data arranged in grouped intervals. In this paper, we discuss a suitable and highly flexible non-parametric density estimation approach for binned distributions, based on monotonicity-preserving cubic spline interpolation. Results from simulation studies demonstrate that this approach outperforms many widely used heuristic methods. Additionally, applying the method to a dataset of train delays in Germany and to micro-census data on distance and travel time to work yields meaningful but also some questionable results.
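A short sketch of this idea using SciPy's PCHIP interpolator as one concrete monotonicity-preserving cubic spline (the bin edges and counts below are made up, and the paper's exact spline construction may differ): interpolate the empirical CDF at the bin edges, then differentiate to obtain a non-negative density.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# binned data: bin edges and counts (e.g. train delays in minutes; hypothetical values)
edges  = np.array([0, 1, 2, 5, 10, 20, 60], dtype=float)
counts = np.array([420, 310, 380, 190, 80, 20], dtype=float)

# empirical CDF evaluated at the bin edges
cdf_at_edges = np.concatenate([[0.0], np.cumsum(counts)]) / counts.sum()

# monotonicity-preserving cubic spline through the CDF points
cdf = PchipInterpolator(edges, cdf_at_edges)
pdf = cdf.derivative()      # non-negative because the spline preserves monotonicity

x = np.linspace(edges[0], edges[-1], 500)
density = pdf(x)            # smooth density estimate over the binned range
```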

Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch leads to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, one at the image level and one at the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers at the different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our approach on multiple datasets, including Cityscapes, KITTI, and SIM10K, among others. The results demonstrate the effectiveness of our approach for robust object detection in various domain shift scenarios.
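An illustrative PyTorch sketch of adversarial domain classification via a gradient reversal layer, which is one common way to realize such an image-level domain classifier (the layer sizes, module names, and loss wiring are assumptions for illustration, not the authors' architecture):

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) the gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class ImageLevelDomainClassifier(nn.Module):
    """Predicts source vs. target domain from backbone feature maps."""
    def __init__(self, in_channels, lam=1.0):
        super().__init__()
        self.lam = lam
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, kernel_size=1),
        )

    def forward(self, feat):
        feat = GradReverse.apply(feat, self.lam)   # adversarial: detector learns to confuse the classifier
        return self.net(feat)                      # per-location domain logits

# usage (shapes assumed): logits = ImageLevelDomainClassifier(1024)(backbone_features)
# domain_loss = nn.BCEWithLogitsLoss()(logits, domain_labels)   # 0 = source, 1 = target
```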
