
We design an additive approximation scheme for estimating the cost of the min-weight bipartite matching problem: given a bipartite graph with non-negative edge costs and $\varepsilon > 0$, our algorithm estimates the cost of matching all but an $O(\varepsilon)$-fraction of the vertices in truly subquadratic time $O(n^{2-\delta(\varepsilon)})$. Our algorithm has a natural interpretation for computing the Earth Mover's Distance (EMD) up to an $\varepsilon$-additive approximation. Notably, we make no assumptions about the underlying metric (more generally, the costs do not have to satisfy the triangle inequality). Note that, compared to the size of the instance (an arbitrary $n \times n$ cost matrix), our algorithm runs in {\em sublinear} time. Our algorithm can also approximate a slightly more general problem: max-cardinality bipartite matching with a knapsack constraint, where the goal is to maximize the number of vertices that can be matched subject to a total cost budget $B$.
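For concreteness, here is a minimal baseline for the quantity being approximated: the exact min-weight perfect matching cost of an arbitrary cost matrix, computed with the Hungarian method via SciPy. This is an illustrative reference point only, assuming a dense $n \times n$ cost matrix; the paper's subquadratic estimator itself is not reproduced here.

```python
# Exact baseline (O(n^3) Hungarian method), i.e., the unnormalized EMD for an
# arbitrary cost matrix. The paper instead *estimates* this cost, allowing an
# O(eps)-fraction of vertices to remain unmatched, in subquadratic time.
import numpy as np
from scipy.optimize import linear_sum_assignment

def exact_matching_cost(C):
    rows, cols = linear_sum_assignment(C)
    return C[rows, cols].sum()

rng = np.random.default_rng(0)
C = rng.random((200, 200))  # arbitrary non-negative costs; no triangle inequality assumed
print(exact_matching_cost(C))
```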

Related Content

Multiphysics incompressible fluid dynamics simulations play a crucial role in understanding the intricate behavior of complex engineering systems that involve interactions between solids, fluids, and phases such as liquid and gas. Numerical modeling of these interactions has generated significant research interest in recent decades and has led to open-source simulation tools and commercial software products targeting specific applications or general problem classes in computational fluid dynamics. As the demand grows for these simulations to adapt to platform heterogeneity, ensure composability between different physics models, and effectively exploit inheritance within systems of partial differential equations, a fundamental reconsideration of numerical solver design becomes imperative. This paper emphasizes the importance of these considerations and introduces the Flash-X approach as a potential solution. The software design strategies outlined here serve as a guide for Flash-X developers, providing insight into the complexities of performance portability, composability, and sustainable development. They also provide a foundation for improving the design of both new and existing simulation tools grappling with these challenges. By incorporating the principles of the Flash-X approach, engineers and researchers can enhance the adaptability, efficiency, and overall effectiveness of their numerical solvers in the ever-evolving field of multiphysics simulations.

For image restoration, methods leveraging priors from generative models have been proposed and have demonstrated a promising capacity to robustly restore photorealistic, high-quality results. However, these methods are susceptible to semantic ambiguity, particularly for images with clearly defined semantics, such as facial images. In this paper, we propose a semantic-aware latent space exploration method for image restoration (SAIR). By explicitly modeling semantic information from a given reference image, SAIR can reliably restore severely degraded images not only to a high-resolution, highly realistic appearance but also with the correct semantics. Quantitative and qualitative experiments collectively demonstrate the superior performance of the proposed SAIR. Our code is available at //github.com/Liamkuo/SAIR.
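As a rough illustration of the general idea (a toy sketch under our own setup, not SAIR's actual architecture; `G`, `E`, and `degrade` below are stand-ins), latent space exploration for restoration optimizes a latent code so that the generated image is consistent with the degraded observation while its semantic embedding stays close to that of a reference image:

```python
# Toy latent-space exploration: the generator G, semantic encoder E, and the
# degradation operator here are stand-ins; in practice these are pretrained
# networks and a known/estimated degradation model.
import torch

torch.manual_seed(0)
G = torch.nn.Linear(64, 256)         # toy "generator": latent -> image
E = torch.nn.Linear(256, 32)         # toy "semantic encoder"
degrade = lambda img: img[..., ::4]  # toy degradation: 4x subsampling

y = degrade(G(torch.randn(64)).detach())    # observed degraded image
ref_emb = E(G(torch.randn(64)).detach())    # embedding of a reference image

z = torch.randn(64, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(200):
    img = G(z)
    # data-fidelity term + semantic-consistency term
    loss = ((degrade(img) - y) ** 2).mean() + 0.1 * ((E(img) - ref_emb) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```

This only shows the shape of the objective; the actual method's losses, models, and search procedure are described in the paper.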

The three-state illness-death model has been established as a general approach for regression analysis of semi-competing risks data. For observational data, marginal structural models (MSMs) are a useful tool under the potential outcomes framework for defining and estimating parameters with causal interpretations. In this paper we introduce a class of marginal structural illness-death models for the analysis of observational semi-competing risks data. We consider two specific such models: the Markov illness-death MSM and the frailty-based Markov illness-death MSM. For interpretation purposes, risk contrasts under the MSMs are defined. Inference under the Markov illness-death MSM can be carried out using estimating equations with inverse probability weighting, while inference under the frailty-based illness-death MSM requires a weighted EM algorithm. We study the inference procedures under both MSMs using extensive simulations and apply them to the analysis of mid-life alcohol exposure on late-life cognitive impairment, as well as mortality, using the Honolulu-Asia Aging Study data set. The R code developed in this work has been implemented in the R package semicmprskcoxmsm, which is publicly available on CRAN.
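For readers unfamiliar with inverse probability weighting, here is a generic sketch of how stabilized IPW weights for a binary exposure are typically computed (standard practice, not the internals of semicmprskcoxmsm; variable names are ours):

```python
# Stabilized inverse-probability-of-treatment weights: fit a propensity model
# for the exposure A given covariates X, then weight each subject by the
# marginal exposure probability over their fitted propensity.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_weights(A, X):
    ps = LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)[:, 1]
    p_marg = A.mean()
    return np.where(A == 1, p_marg / ps, (1 - p_marg) / (1 - ps))

rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 3))
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # confounded exposure
w = ipw_weights(A, X)
print(w.mean())  # stabilized weights average approximately 1
```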

Most existing causal discovery methods rely on the assumption of no latent confounders, limiting their applicability to real-life problems. In this paper, we introduce a novel, versatile framework for causal discovery that accommodates causally related hidden variables almost anywhere in the causal network (for instance, they can be effects of observed variables), based on the rank information of the covariance matrix over observed variables. We start by investigating the efficacy of rank in comparison to conditional independence and theoretically establish necessary and sufficient conditions for the identifiability of certain latent structural patterns. Furthermore, we develop a Rank-based Latent Causal Discovery algorithm, RLCD, that can efficiently locate hidden variables, determine their cardinalities, and discover the entire causal structure over both measured and hidden variables. We also show that, under certain graphical conditions, RLCD correctly identifies the Markov equivalence class of the whole latent causal graph asymptotically. Experimental results on both synthetic and real-world personality data sets demonstrate the efficacy of the proposed approach in finite-sample cases.
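To illustrate the kind of rank information involved (a minimal sketch under a linear-Gaussian assumption; the function names and the numerical-rank threshold are ours, not RLCD's implementation), the rank of the cross-covariance submatrix between two blocks of observed variables reveals how many latent factors separate them:

```python
# Numerical rank of Cov(X_A, X_B): in a linear model, if a single hidden
# variable L drives both blocks, this cross-covariance has rank 1.
import numpy as np

def cross_cov_rank(X, idx_a, idx_b, tol=0.05):
    S = np.cov(X, rowvar=False)
    sub = S[np.ix_(idx_a, idx_b)]
    svals = np.linalg.svd(sub, compute_uv=False)
    return int((svals > tol * svals[0]).sum())  # relative threshold

rng = np.random.default_rng(1)
L = rng.normal(size=(5000, 1))                   # one hidden variable
X = np.hstack([L @ rng.normal(size=(1, 3)),      # observed block A
               L @ rng.normal(size=(1, 3))])     # observed block B
X += 0.1 * rng.normal(size=X.shape)              # independent noise
print(cross_cov_rank(X, [0, 1, 2], [3, 4, 5]))   # ~1, revealing one latent
```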

Measurement-based quantum computing (MBQC) is a promising quantum computing paradigm that performs computation through ``one-way'' measurements on entangled qubits. It is widely used in photonic quantum computing (PQC), where the computation is carried out on photonic cluster states (i.e., a 2-D mesh of entangled photons). In MBQC-based PQC, the cluster state depth (i.e., the length of the one-way measurement sequence) needed to execute a quantum circuit plays an important role in the overall execution time and error, so it is important to reduce it. In this paper, we propose FMCC, a compilation framework that employs dynamic programming to efficiently minimize the cluster state depth. Experimental results on five representative quantum algorithms show that FMCC achieves 53.6%, 60.6%, and 60.0% average depth reductions for small, medium, and large qubit counts, respectively, compared to state-of-the-art MBQC compilation frameworks.

This paper introduces and characterizes a new family of continuous probability distributions applicable to norm distributions in three-dimensional random spaces, specifically to the Euclidean norm of three random Gaussian variables with non-zero means. The distribution is supported on the semi-infinite range $[0,\infty)$ and is notable for its computational tractability. Building on this foundation, we also introduce a separate family of continuous probability distributions suitable for power distributions in three-dimensional random spaces. Although previously unstudied, these distributions are attractive for numerous applications, some of which are discussed in this work.
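As a concrete anchor: for three independent unit-variance Gaussians with means $\mu_1,\mu_2,\mu_3$, the squared Euclidean norm follows a noncentral chi-squared law with $k=3$ degrees of freedom and noncentrality $\lambda=\mu_1^2+\mu_2^2+\mu_3^2$, so the norm itself is noncentral-chi distributed. A quick Monte Carlo check of this standard fact (ours, not from the paper):

```python
# Empirical CDF of the squared norm of three shifted unit Gaussians versus the
# noncentral chi-squared CDF with df=3 and nc = sum(mu_i^2).
import numpy as np
from scipy.stats import ncx2

mu = np.array([0.5, -1.0, 2.0])
lam = float(np.sum(mu**2))
rng = np.random.default_rng(2)
samples = rng.normal(loc=mu, scale=1.0, size=(200_000, 3))
sq_norm = np.sum(samples**2, axis=1)

for x in (1.0, 4.0, 9.0):
    print(x, (sq_norm <= x).mean(), ncx2.cdf(x, df=3, nc=lam))  # should agree
```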

Low-precision fine-tuning of language models has gained prominence as a cost-effective and energy-efficient approach to deploying large-scale models in various applications. However, this approach is susceptible to outlier values in the activations: outliers inflate the scaling factor and thus make smaller values harder to represent, which degrades the performance of fine-tuning in the low-precision regime. This paper investigates techniques for mitigating outlier activations in low-precision integer fine-tuning of language models. Our proposed approach represents the outlier activation values as 8-bit integers instead of floating-point (FP16) values. The benefit of using integers for outlier values is that it enables operator tiling, which avoids performing full 16-bit integer matrix multiplication and thereby addresses this problem effectively. We provide theoretical analysis and supporting experiments to demonstrate the effectiveness of our approach in improving the robustness and performance of low-precision fine-tuned language models.
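A minimal illustration of the underlying problem (our own toy, not the paper's method): with symmetric per-tensor int8 quantization, a single outlier inflates the scale factor and collapses the bulk of small activations onto a few integer codes.

```python
# Symmetric per-tensor int8 quantization: scale = max(|x|)/127. One outlier
# blows up the scale, so the quantization error on the small values grows.
import numpy as np

def quantize_int8(x):
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(3)
acts = rng.normal(scale=0.02, size=1024)   # typical small activations
acts_with_outlier = acts.copy()
acts_with_outlier[0] = 10.0                # a single large outlier

for name, a in [("clean", acts), ("outlier", acts_with_outlier)]:
    q, s = quantize_int8(a)
    err = np.mean((q.astype(np.float32) * s - a) ** 2)
    print(name, "scale:", s, "MSE:", err)  # MSE jumps once the outlier is present
```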

Debugging physical computing projects provides a rich context for understanding cross-disciplinary problem solving that integrates multiple domains of computing and engineering. Yet understanding and assessing students' learning of debugging remains a challenge, particularly in understudied areas such as physical computing, since finding and fixing hardware and software bugs is a deeply contextual practice. In this paper we draw on the rich history of clinical interviews to develop and pilot "failure artifact scenarios" in order to study changes in students' approaches to debugging and troubleshooting electronic textiles (e-textiles). We applied this clinical interview protocol before and after an eight-week-long e-textiles unit, analyzing pre/post clinical interviews from 18 students at four different schools. The analysis revealed that students improved at identifying bugs with greater specificity and across domains, and at considering multiple causes for bugs. We discuss implications for developing tools to assess students' debugging abilities through contextualized debugging scenarios in physical computing.

The existence of representative datasets is a prerequisite for many successful artificial intelligence and machine learning models. However, the subsequent application of these models often involves scenarios that are inadequately represented in the training data. The reasons for this are manifold and range from time and cost constraints to ethical considerations. As a consequence, the reliable use of these models, especially in safety-critical applications, is a major challenge. Leveraging additional, already existing sources of knowledge is key to overcoming the limitations of purely data-driven approaches and, eventually, to increasing the generalization capability of these models. Furthermore, predictions that conform with knowledge are crucial for making trustworthy and safe decisions even in underrepresented scenarios. This work provides an overview of existing techniques and methods in the literature that combine data-based models with existing knowledge. The identified approaches are structured according to the categories of integration, extraction, and conformity. Special attention is given to applications in the field of autonomous driving.

We introduce a generic framework that reduces the computational cost of object detection while retaining accuracy in scenarios where objects of varied sizes appear in high-resolution images. Detection proceeds in a coarse-to-fine manner: first on a down-sampled version of the image, and then on a sequence of higher-resolution regions identified as likely to improve detection accuracy. Built upon reinforcement learning, our approach consists of a model (R-net) that uses coarse detection results to predict the potential accuracy gain of analyzing a region at a higher resolution, and another model (Q-net) that sequentially selects regions to zoom in on. Experiments on the Caltech Pedestrian dataset show that our approach reduces the number of processed pixels by over 50% without a drop in detection accuracy. The merits of our approach become more significant on a high-resolution test set collected from the YFCC100M dataset, where it maintains high detection performance while reducing the number of processed pixels by about 70% and the detection time by over 50%.
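The control loop can be sketched as follows (a self-contained toy under our own naming: the stub "detector" thresholds pixel intensity, the "R-net" gain map is a patch mean, and the "Q-net" is greedy selection; the real components are learned networks trained with reinforcement learning):

```python
# Toy coarse-to-fine detection loop: run a cheap pass on a down-sampled image,
# then spend the remaining budget on the patches with the highest predicted gain.
import numpy as np

def detect(img):                      # stub detector: "detections" = bright pixels
    return list(zip(*np.where(img > 0.8)))

def coarse_to_fine(image, n_zooms=3, patch=32):
    small = image[::4, ::4]                           # coarse, down-sampled pass
    dets = [(4 * r, 4 * c) for r, c in detect(small)]
    H, W = image.shape
    gains = np.array([[image[i:i+patch, j:j+patch].mean()   # stub "R-net" gain map
                       for j in range(0, W, patch)]
                      for i in range(0, H, patch)])
    for _ in range(n_zooms):                          # stub "Q-net": greedy selection
        i, j = np.unravel_index(np.argmax(gains), gains.shape)
        gains[i, j] = -np.inf                         # never re-select a region
        dets += [(i * patch + r, j * patch + c)
                 for r, c in detect(image[i*patch:(i+1)*patch, j*patch:(j+1)*patch])]
    return dets                                       # (dedup of overlaps omitted)

img = np.random.default_rng(4).random((128, 128))
print(len(coarse_to_fine(img)))
```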
