
Medical imaging models have been shown to encode information about patient demographics such as age, race, and sex in their latent representation, raising concerns about their potential for discrimination. Here, we ask whether requiring models not to encode demographic attributes is desirable. We point out that marginal and class-conditional representation invariance imply the standard group fairness notions of demographic parity and equalized odds, respectively, while additionally requiring risk distribution matching, thus potentially equalizing away important group differences. Enforcing the traditional fairness notions directly instead does not entail these strong constraints. Moreover, representationally invariant models may still take demographic attributes into account for deriving predictions. The latter can be prevented using counterfactual notions of (individual) fairness or invariance. We caution, however, that properly defining medical image counterfactuals with respect to demographic attributes is highly challenging. Finally, we posit that encoding demographic attributes may even be advantageous if it enables learning a task-specific encoding of demographic features that does not rely on social constructs such as 'race' and 'gender.' We conclude that demographically invariant representations are neither necessary nor sufficient for fairness in medical imaging. Models may need to encode demographic attributes, lending further urgency to calls for comprehensive model fairness assessments in terms of predictive performance across diverse patient groups.
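To make the group fairness notions referenced above concrete, the following sketch computes demographic parity and equalized odds gaps for a binary classifier and a binary protected attribute. The function names and the synthetic data are illustrative assumptions, not material from the paper.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_gap(y_pred, y_true, group):
    """Largest gap in class-conditional positive rates (FPR for y=0, TPR for y=1) across groups."""
    gaps = []
    for y in (0, 1):  # condition on the true label
        rates = [y_pred[(group == g) & (y_true == y)].mean()
                 for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Toy example with random predictions and a binary protected attribute
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
print(demographic_parity_gap(y_pred, group),
      equalized_odds_gap(y_pred, y_true, group))
```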

Related Content

Dataset Distillation (DD) is a prominent technique that encapsulates knowledge from a large-scale original dataset into a small synthetic dataset for efficient training. Meanwhile, Pre-trained Models (PTMs) function as knowledge repositories, containing extensive information from the original dataset. This naturally raises a question: can PTMs effectively transfer knowledge to synthetic datasets, guiding DD accurately? To this end, we conduct preliminary experiments, confirming the contribution of PTMs to DD. Afterwards, we systematically study different options in PTMs, including initialization parameters, model architecture, training epochs, and domain knowledge, revealing that: 1) increasing model diversity enhances the performance of synthetic datasets; 2) sub-optimal models can also assist in DD and outperform well-trained ones in certain cases; 3) domain-specific PTMs are not mandatory for DD, but a reasonable domain match is crucial. Finally, by selecting optimal options, we significantly improve cross-architecture generalization over baseline DD methods. We hope our work will help researchers develop better DD techniques. Our code is available at //github.com/yaolu-zjut/DDInterpreter.
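To illustrate the general idea of PTM-guided distillation, here is a minimal sketch in which a frozen pre-trained encoder supervises a learnable synthetic set by matching per-class mean features. The encoder choice (torchvision ResNet-18), image sizes, and the feature-matching loss are assumptions made for illustration and not the exact method studied in the paper.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Frozen pre-trained model acting as the knowledge repository
encoder = resnet18(weights="IMAGENET1K_V1")
encoder.fc = torch.nn.Identity()
encoder.eval()
for p in encoder.parameters():
    p.requires_grad_(False)

# Learnable synthetic dataset: 10 images per class, 10 classes, 3x64x64
syn_images = torch.randn(100, 3, 64, 64, requires_grad=True)
syn_labels = torch.arange(10).repeat_interleave(10)
opt = torch.optim.Adam([syn_images], lr=0.1)

def distill_step(real_images, real_labels):
    """Match per-class mean features of synthetic and real data in PTM feature space."""
    with torch.no_grad():
        real_feat = encoder(real_images)
    syn_feat = encoder(syn_images)
    loss = 0.0
    for c in real_labels.unique():
        loss = loss + F.mse_loss(syn_feat[syn_labels == c].mean(0),
                                 real_feat[real_labels == c].mean(0))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy call with random data standing in for a batch of the original dataset
distill_step(torch.randn(256, 3, 64, 64), torch.randint(0, 10, (256,)))
```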

We rigorously study the joint evolution of training dynamics via stochastic gradient descent (SGD) and the spectra of empirical Hessian and gradient matrices. We prove that in two canonical classification tasks for multi-class high-dimensional mixtures and either one- or two-layer neural networks, the SGD trajectory rapidly aligns with emerging low-rank outlier eigenspaces of the Hessian and gradient matrices. Moreover, in multi-layer settings this alignment occurs per layer, with the final layer's outlier eigenspace evolving over the course of training and exhibiting rank deficiency when the SGD converges to sub-optimal classifiers. This establishes some of the rich predictions that have arisen from extensive numerical studies in the last decade about the spectra of Hessian and information matrices over the course of training in overparametrized networks.
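The alignment phenomenon can be probed numerically. The toy sketch below runs online SGD on a two-class Gaussian mixture with logistic loss and measures the overlap between the learned weight vector and the top eigenvector of the empirical Hessian; the model, dimensions, and step size are illustrative assumptions and far simpler than the multi-class, multi-layer setting analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lr = 50, 2000, 0.1

# Two-class Gaussian mixture with means +/- mu and labels in {0, 1}
mu = np.zeros(d); mu[0] = 2.0
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d)) + np.where(y[:, None] == 1, mu, -mu)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = rng.normal(size=d) / np.sqrt(d)
for t in range(2000):                       # online SGD on the logistic loss
    i = rng.integers(n)
    p = sigmoid(X[i] @ w)
    w -= lr * (p - y[i]) * X[i]

# Empirical Hessian of the logistic loss and its top (outlier) eigenvector
p_all = sigmoid(X @ w)
H = (X * (p_all * (1 - p_all))[:, None]).T @ X / n
_, eigvec = np.linalg.eigh(H)
v_top = eigvec[:, -1]

# Alignment of the SGD iterate with the outlier eigendirection of the Hessian
print("alignment:", abs(v_top @ w) / np.linalg.norm(w))
```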

The authors are concerned about the safety, health, and rights of the European citizens due to inadequate measures and procedures required by the current draft of the EU Artificial Intelligence (AI) Act for the conformity assessment of AI systems. We observe that not only the current draft of the EU AI Act, but also the accompanying standardization efforts in CEN/CENELEC, have resorted to the position that real functional guarantees of AI systems supposedly would be unrealistic and too complex anyway. Yet enacting a conformity assessment procedure that creates the false illusion of trust in insufficiently assessed AI systems is at best naive and at worst grossly negligent. The EU AI Act thus misses the point of ensuring quality by functional trustworthiness and correctly attributing responsibilities. The trustworthiness of an AI decision system lies first and foremost in the correct statistical testing on randomly selected samples and in the precision of the definition of the application domain, which enables drawing samples in the first place. We will subsequently call this testable quality functional trustworthiness. It includes a design, development, and deployment that enables correct statistical testing of all relevant functions. We are firmly convinced and advocate that a reliable assessment of the statistical functional properties of an AI system has to be the indispensable, mandatory nucleus of the conformity assessment. In this paper, we describe the three necessary elements to establish a reliable functional trustworthiness, i.e., (1) the definition of the technical distribution of the application, (2) the risk-based minimum performance requirements, and (3) the statistically valid testing based on independent random samples.
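As an illustration of element (3), statistically valid testing against a risk-based minimum performance requirement can be carried out with a one-sided binomial confidence bound on the error rate estimated from independent random samples. The sketch below uses a Clopper-Pearson bound; the sample size, error count, and requirement threshold are made-up numbers for illustration.

```python
from scipy.stats import beta

def error_rate_upper_bound(errors, n, alpha=0.05):
    """One-sided Clopper-Pearson upper bound on the true error rate,
    given `errors` failures observed in `n` independent random test samples."""
    if errors == n:
        return 1.0
    return beta.ppf(1 - alpha, errors + 1, n - errors)

# Example: 12 errors in 1,000 samples drawn from the defined application distribution,
# checked against a hypothetical risk-based requirement of at most 3% error rate.
n, errors, requirement = 1000, 12, 0.03
ub = error_rate_upper_bound(errors, n)
print(f"95% upper bound on error rate: {ub:.4f}")
print("meets requirement" if ub <= requirement else "fails requirement")
```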

We present a large empirical investigation on the use of pre-trained visual representations (PVRs) for training downstream policies that execute real-world tasks. Our study spans five different PVRs, two different policy-learning paradigms (imitation and reinforcement learning), and three different robots for five distinct manipulation and indoor navigation tasks. From this effort, we arrive at three insights: 1) the performance trends of PVRs in simulation are generally indicative of their trends in the real world, 2) the use of PVRs enables a first-of-its-kind result with indoor ImageNav (zero-shot transfer to a held-out scene in the real world), and 3) the benefits from variations in PVRs, primarily data augmentation and fine-tuning, also transfer to real-world performance. See the project website for additional details and visuals.
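A typical PVR-based policy-learning setup can be sketched as follows: a frozen pre-trained visual encoder produces features that a small policy head maps to actions, trained here by behavior cloning. The specific encoder (torchvision ResNet-50), feature and action dimensions, and loss are assumptions for illustration rather than the configurations evaluated in the study.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Frozen pre-trained visual representation (PVR)
pvr = resnet50(weights="IMAGENET1K_V2")
pvr.fc = nn.Identity()
pvr.eval()
for p in pvr.parameters():
    p.requires_grad_(False)

# Small policy head mapping PVR features to (hypothetical) 7-DoF continuous actions
policy_head = nn.Sequential(nn.Linear(2048, 256), nn.ReLU(), nn.Linear(256, 7))
opt = torch.optim.Adam(policy_head.parameters(), lr=1e-3)

def bc_step(images, expert_actions):
    """One behavior-cloning step on top of frozen PVR features."""
    with torch.no_grad():
        feats = pvr(images)                     # (B, 2048) frozen features
    loss = nn.functional.mse_loss(policy_head(feats), expert_actions)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy call with random data standing in for camera frames and expert actions
bc_step(torch.randn(8, 3, 224, 224), torch.randn(8, 7))
```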

Recently, Sato et al. proposed a publicly verifiable blind quantum computation (BQC) protocol that introduces a third-party arbiter. However, it is not truly publicly verifiable, because the arbiter is determined in advance and participates in the whole process. In this paper, a publicly verifiable protocol for measurement-only BQC is proposed. The fidelity between arbitrary prepared states and the graph states of 2-colorable graphs is estimated by measuring the entanglement witnesses of the graph states, so as to verify the correctness of the prepared graph states. Compared with the previous protocol, our protocol is publicly verifiable in the true sense, since randomly chosen clients can execute the public verification. It is also more efficient: the number of local measurements is O(n^3 log n) and the number of copies of the graph states is O(n^2 log n).

We propose an abstract framework for analyzing the convergence of least-squares methods based on residual minimization when the feasible solutions are neural networks. Using norm relations and compactness arguments, we derive error estimates for both continuous and discrete formulations of residual minimization in strong and weak forms. The formulations cover recently developed physics-informed neural networks based on strong and variational formulations.
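For concreteness, a minimal strong-form residual-minimization example (a physics-informed neural network) for the 1D Poisson problem -u''(x) = pi^2 sin(pi x) on (0, 1) with homogeneous Dirichlet boundary conditions is sketched below. The network size, optimizer, and sampling scheme are illustrative choices, not part of the paper's abstract framework.

```python
import math
import torch

# Exact solution of the toy problem is u(x) = sin(pi x).
torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.rand(128, 1, requires_grad=True)             # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = -d2u - math.pi**2 * torch.sin(math.pi * x)  # strong-form PDE residual
    xb = torch.tensor([[0.0], [1.0]])                       # boundary points
    loss = (residual**2).mean() + (net(xb)**2).mean()       # least-squares residual + BC penalty
    opt.zero_grad(); loss.backward(); opt.step()
```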

This chapter delves into the realm of computational complexity, exploring challenging combinatorial problems and their ties with statistical physics. Our exploration starts with the foundations of combinatorial problems, emphasizing what makes them hard. We traverse the class P, which comprises problems solvable in polynomial time by deterministic algorithms, and contrast it with the class NP, where finding efficient solutions remains an enigmatic endeavor, examining the algorithmic transitions and thresholds that demarcate the boundary between tractable and intractable problems. We discuss the implications of the P versus NP problem, one of the profoundest unsolved enigmas of computer science and mathematics, bearing a tantalizing reward for its resolution. Drawing parallels between combinatorics and statistical physics, we uncover intriguing interconnections that shed light on the nature of hard problems: statistical physics unveils profound analogies with the complexities witnessed in combinatorial landscapes. Throughout this chapter, we discuss the interplay between computational complexity theory and statistical physics. By unveiling the mysteries surrounding hard problems, we aim to deepen understanding of the very essence of computation and its boundaries. Through this interdisciplinary approach, we aspire to illuminate the intricate tapestry of complexity underpinning the mathematical and physical facets of hard problems.
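To ground the P-versus-NP discussion, the toy sketch below contrasts the two ingredients for 3-SAT, a canonical NP-complete problem: verifying a candidate assignment takes polynomial time, while the naive search over all assignments is exponential in the number of variables. The clause set is an arbitrary example.

```python
from itertools import product

# Each clause is a tuple of literals: integer i means x_i, -i means (not x_i).
clauses = [(1, -2, 3), (-1, 2, -3), (2, 3, -1)]

def verify(assignment, clauses):
    """Polynomial-time check that an assignment satisfies every clause."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

def brute_force(clauses, n):
    """Exhaustive search over all 2^n assignments (exponential in n)."""
    for bits in product([False, True], repeat=n):
        assignment = dict(enumerate(bits, start=1))
        if verify(assignment, clauses):
            return assignment
    return None

print(brute_force(clauses, n=3))
```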

NVIDIA researchers have pioneered an explicit method, position-based dynamics (PBD), for simulating systems with contact forces, and it has gained widespread use in computer graphics and animation. While the method yields visually compelling real-time simulations with surprising numerical stability, its scientific validity has been questioned due to a lack of rigorous analysis. In this paper, we introduce a new mathematical convergence analysis specifically tailored for PBD applied to first-order dynamics. Utilizing newly derived bounds for projections onto uniformly prox-regular sets, our proof extends classical compactness arguments. Our work paves the way for the reliable application of PBD in various scientific and engineering fields, including particle simulations with volume exclusion, agent-based models in mathematical biology, and inequality-constrained gradient-flow models.
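A minimal sketch of PBD for first-order dynamics with volume exclusion is given below: a free drift step followed by iterative Gauss-Seidel projection of pairwise non-overlap constraints. The particle count, radius, time step, and iteration counts are illustrative and do not reflect the analysis in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, dt, iters = 20, 0.05, 0.01, 10
x = rng.uniform(0, 1, size=(n, 2))         # particle positions in the unit square
v = rng.normal(0, 1, size=(n, 2))          # prescribed drift velocity (first-order dynamics)

def project_overlaps(x):
    """One Gauss-Seidel sweep: push apart every overlapping pair |x_i - x_j| < 2r symmetrically."""
    for i in range(n):
        for j in range(i + 1, n):
            d = x[i] - x[j]
            dist = np.linalg.norm(d)
            if 0 < dist < 2 * r:
                corr = 0.5 * (2 * r - dist) * d / dist
                x[i] += corr
                x[j] -= corr
    return x

for step in range(100):
    x = x + dt * v                          # explicit drift step
    for _ in range(iters):
        x = project_overlaps(x)             # position-based constraint projection
```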

Sewer pipe network systems are an important part of civil infrastructure, and in order to find a good trade-off between maintenance costs and system performance, reliable sewer pipe degradation models are essential. In this paper, we present a large-scale case study in the city of Breda in the Netherlands. Our dataset has information on sewer pipes built since the 1920s and contains information on different covariates. We also have several types of damage, but we focus our attention on infiltrations, surface damage, and cracks. Each damage type has an associated severity index ranging from 1 to 5. To account for the characteristics of sewer pipes, we defined 6 cohorts of interest. Two types of discrete-time Markov chains (DTMC), which we call Chain `Multi' and Chain `Single' (where Chain `Multi' contains additional transitions compared to Chain `Single'), are commonly used to model sewer pipe degradation at the pipeline level, and we evaluate which better suits our case study. To calibrate the DTMCs, we define an optimization process using Sequential Least-Squares Programming to find the DTMC parameters that minimize the root mean weighted square error. Our results show that for our case study there is no substantial difference between Chain `Multi' and Chain `Single', but the latter has fewer parameters and is easier to train. Our DTMCs are useful for comparing the cohorts via their expected values; e.g., concrete pipes carrying mixed and waste content reach severe levels of surface damage more quickly than concrete pipes carrying rainwater, a phenomenon typically identified in practice.
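The calibration procedure can be illustrated with a small sketch: a `Single'-type chain over five severity states with only stay-or-advance transitions, fitted with SciPy's SLSQP by minimizing the root mean weighted square error between predicted and observed severity distributions. The observed distributions, weights, ages, and initial guess below are made up for illustration and are not the Breda data.

```python
import numpy as np
from scipy.optimize import minimize

ages = np.array([10, 30, 50, 70])                       # pipe ages (years) with inspections
observed = np.array([[0.80, 0.15, 0.04, 0.01, 0.00],    # observed severity distributions (made up)
                     [0.55, 0.25, 0.12, 0.06, 0.02],
                     [0.35, 0.28, 0.18, 0.12, 0.07],
                     [0.20, 0.25, 0.22, 0.18, 0.15]])
weights = np.ones_like(observed)

def transition_matrix(p):
    """Chain `Single': from state i a pipe stays with prob 1-p[i] or advances to i+1 with prob p[i]."""
    P = np.eye(5)
    for i in range(4):
        P[i, i], P[i, i + 1] = 1 - p[i], p[i]
    return P

def rmwse(p):
    """Root mean weighted square error between model and observed distributions."""
    P = transition_matrix(p)
    pi0 = np.array([1.0, 0, 0, 0, 0])                   # new pipes start in the best state
    pred = np.array([pi0 @ np.linalg.matrix_power(P, a) for a in ages])
    return np.sqrt(np.mean(weights * (pred - observed) ** 2))

res = minimize(rmwse, x0=np.full(4, 0.05), method="SLSQP", bounds=[(0.0, 1.0)] * 4)
print("fitted transition probabilities:", res.x, "objective:", res.fun)
```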

Individuals, businesses, and governments all face additional difficulties because of the rise of sophisticated cyberattacks. This paper investigates the targeting of journalists and activists by the Pegasus malware. To gain a deeper understanding of the tactics utilized by cybercriminals and the vulnerabilities they exploit, this research examines numerous incidents and identifies recurring patterns in the strategies, methods, and practices employed. In this paper, a comprehensive analysis is conducted on the far-reaching consequences of these attacks for cybersecurity policy, encompassing the pressing need for enhanced threat intelligence sharing mechanisms, the implementation of more resilient incident response protocols, and the allocation of greater financial resources towards the advancement of cybersecurity research and development initiatives. The research also discusses how Pegasus affects SCADA systems and critical infrastructure, and it describes some of the most important tactics that businesses may use to reduce the danger of cyberattacks and safeguard themselves against the 21st century's growing threats. The reach of Pegasus spyware, which can access a wide range of data and communications on mobile devices running iOS and Android and thereby potentially jeopardise the civil rights and privacy of journalists, activists, and political leaders throughout the world, was found to be worrying.
