
We consider a matching problem in a bipartite graph $G$ where every vertex has a capacity and a strict preference order on its neighbors. Furthermore, there is a cost function on the edge set. We assume $G$ admits a perfect matching, i.e., one that fully matches all vertices. Only perfect matchings are feasible for us, and we are interested in those that are popular within the set of perfect matchings. It is known that such matchings (called popular perfect matchings) always exist and can be efficiently computed. What we seek here is not any popular perfect matching, but a min-cost one. We show a polynomial-time algorithm for finding such a matching; this is via a characterization of popular perfect matchings in $G$ in terms of stable matchings in a colorful auxiliary instance. This generalizes a characterization that was known in the one-to-one setting.
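Since the characterization reduces popularity to stability in an auxiliary instance, it may help to recall the classical one-to-one stable matching computation. The sketch below is a minimal Gale-Shapley implementation for complete strict preference lists; it does not reconstruct the paper's colorful auxiliary instance (capacities, edge costs), and all names in it are illustrative.

```python
# Minimal Gale-Shapley sketch for one-to-one stable matching with complete
# strict preference lists. Illustrative only; not the paper's construction.
def gale_shapley(proposer_prefs, receiver_prefs):
    # proposer_prefs / receiver_prefs: dict vertex -> list of neighbors,
    # most preferred first.
    rank = {r: {p: i for i, p in enumerate(lst)} for r, lst in receiver_prefs.items()}
    free = list(proposer_prefs)            # proposers not yet matched
    next_idx = {p: 0 for p in proposer_prefs}  # next receiver to propose to
    match = {}                             # receiver -> proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_idx[p]]
        next_idx[p] += 1
        if r not in match:
            match[r] = p                   # r accepts its first proposal
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])          # r trades up, old partner is freed
            match[r] = p
        else:
            free.append(p)                 # r rejects p
    return {p: r for r, p in match.items()}

# Tiny example: the unique stable matching here is a-y, b-x.
print(gale_shapley({'a': ['x', 'y'], 'b': ['x', 'y']},
                   {'x': ['b', 'a'], 'y': ['a', 'b']}))
```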

Related Content

We consider the dunking problem: a solid body at uniform temperature $T_{\text i}$ is placed in an environment characterized by far-field temperature $T_\infty$ and a spatially uniform, time-independent heat transfer coefficient. We permit heterogeneous material composition: spatially dependent density, specific heat, and thermal conductivity. Mathematically, the problem is described by a heat equation with Robin boundary conditions. The crucial parameter is the Biot number -- a nondimensional heat transfer (Robin) coefficient; we consider the limit of small Biot number. We introduce first-order and second-order asymptotic approximations (in Biot number) for several quantities of interest, notably the spatial domain average temperature as a function of time; the first-order approximation is simply the standard engineering `lumped' model. We then provide asymptotic error estimates for the first-order and second-order approximations for small Biot number, and also, for the first-order approximation, alternative strict bounds valid for all Biot number. Companion numerical solutions of the heat equation confirm the effectiveness of the error estimates for small Biot number. The second-order approximation and the first-order and second-order error estimates depend on several functional outputs associated with an elliptic partial differential equation; the latter is derived from Biot-sensitivity analysis of the heat equation eigenproblem in the limit of small Biot number. Most important is $\phi$, the only functional output required for the first-order error estimates; $\phi$ admits a simple physical interpretation in terms of a conduction length scale. We investigate the domain and property dependence of $\phi$: most notably, we characterize spatial domains for which the standard lumped-model error criterion -- Biot number (based on the volume-to-area length scale) small -- is deficient.
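For reference, the first-order `lumped' model mentioned above takes the familiar textbook form. This is a sketch for the homogeneous case with uniform properties; the notation ($\rho$, $c$, $V$, $A$, $h$, $k$) is generic, not the paper's.

```latex
% Lumped model: spatially uniform body temperature T(t), uniform density rho,
% specific heat c, volume V, wetted area A, heat transfer coefficient h.
\rho c V \, \frac{\mathrm{d}T}{\mathrm{d}t} = -hA \left( T - T_\infty \right),
  \qquad T(0) = T_{\text i},
% which integrates to exponential relaxation with time constant tau:
T(t) = T_\infty + \left( T_{\text i} - T_\infty \right) e^{-t/\tau},
  \qquad \tau = \frac{\rho c V}{hA},
% validity is conventionally judged by the Biot number based on L = V/A:
\mathrm{Bi} = \frac{hL}{k}, \qquad L = \frac{V}{A}.
```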

Population protocols are a model of distributed computation in which an arbitrary number of indistinguishable finite-state agents interact in pairs to decide some property of their initial configuration. We investigate the behaviour of population protocols under adversarial faults that cause agents to silently crash and no longer interact with other agents. As a starting point, we consider the property ``the number of agents exceeds a given threshold $t$'', represented by the predicate $x \geq t$, and show that the standard protocol for $x \geq t$ is very fragile: one single crash in a computation with $x:=2t-1$ agents can already cause the protocol to answer incorrectly that $x \geq t$ does not hold. However, a slightly less known protocol is robust: for any number $t' \geq t$ of agents, at least $t' - t+1$ crashes must occur for the protocol to answer that the property does not hold. We formally define robustness for arbitrary population protocols, and investigate whether every predicate computable by population protocols has a robust protocol. Angluin et al. proved in 2007 that population protocols decide exactly the Presburger predicates, which can be represented as Boolean combinations of threshold predicates of the form $\sum_{i=1}^n a_i \cdot x_i \geq t$ for $a_1, \ldots, a_n, t \in \mathbb{Z}$ and modulo predicates of the form $\sum_{i=1}^n a_i \cdot x_i \bmod m \geq t$ for $a_1, \ldots, a_n, m, t \in \mathbb{N}$. We design robust protocols for all threshold and modulo predicates. We also show that, unfortunately, the techniques in the literature that construct a protocol for a Boolean combination of predicates given protocols for the conjuncts do not preserve robustness. So the question remains open.
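To make the fragility claim concrete, here is a toy simulation of the standard weight-consolidation protocol for $x \geq t$. It is a sketch: the random scheduler, the crash timing, and the direct inspection of the final configuration (in place of simulating output propagation) are all simplifications of the formal model.

```python
import random

def simulate_threshold(n_agents, t, crash_step=None, steps=10_000, seed=0):
    # Standard protocol for x >= t: every agent starts with value 1; when two
    # agents with values a, b meet they move to (min(a+b, t), a+b - min(a+b, t)),
    # i.e. weight is consolidated into one agent, capped at t.
    rng = random.Random(seed)
    vals = [1] * n_agents
    alive = list(range(n_agents))
    for step in range(steps):
        if step == crash_step and len(alive) > 1:
            # Adversarial crash: silently remove the agent holding the most weight.
            alive.remove(max(alive, key=lambda i: vals[i]))
        i, j = rng.sample(alive, 2)
        s = vals[i] + vals[j]
        vals[i], vals[j] = min(s, t), s - min(s, t)
    # Simplification: check the final configuration directly.
    return any(vals[i] >= t for i in alive)

t = 5
print(simulate_threshold(2 * t - 1, t))                    # True: x = 2t-1 >= t, no faults
print(simulate_threshold(2 * t - 1, t, crash_step=5_000))  # typically False: one late crash
                                                           # erases the consolidated weight
```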

Survey data typically have missing values due to unit and item nonresponse. Sometimes, survey organizations know the marginal distributions of certain categorical variables in the survey. As shown in previous work, survey organizations can leverage these distributions in multiple imputation for nonignorable unit nonresponse, generating imputations that result in plausible completed-data estimates for the variables with known margins. However, this prior work does not use the design weights for unit nonrespondents; rather, it relies on a set of fabricated weights for these units. We extend this previous work to utilize the design weights for all sampled units. We illustrate the approach using simulation studies.

An iterative detection and decoding (IDD) scheme is proposed for multiuser multiple-antenna systems assisted by an active or a passive Reconfigurable Intelligent Surface (RIS). The proposed approach features an IDD strategy that incorporates Low-Density Parity-Check (LDPC) codes, and RIS processing with refinement of soft information, in the form of log-likelihood ratios (LLRs), and truncation. Specifically, a minimum mean square error (MMSE) receive filter is used for refinement of LLRs and truncation at the RIS, and for soft interference cancellation at the receiver. An analysis of the proposed MMSE refinement is also provided, along with a study of the computational complexity of the proposed and existing schemes. Simulation results demonstrate significant improvements in system capacity and bit error rate in the presence of block-fading channels.
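As a reminder of the building block involved, the sketch below computes a plain MMSE receive filter for a linear model $y = Hx + n$ with hypothetical dimensions; it is not the paper's full IDD/RIS pipeline, only the filter form that underlies the LLR refinement and the soft interference cancellation described above.

```python
import numpy as np

# Toy MMSE detection for y = H x + n (dimensions are illustrative).
rng = np.random.default_rng(0)
n_rx, n_users, sigma2 = 8, 4, 0.1

H = (rng.standard_normal((n_rx, n_users))
     + 1j * rng.standard_normal((n_rx, n_users))) / np.sqrt(2)
x = rng.choice([-1.0, 1.0], size=n_users)    # unit-power BPSK symbols
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n_rx)
                               + 1j * rng.standard_normal(n_rx))
y = H @ x + noise

# MMSE filter: W^H = (H^H H + sigma^2 I)^{-1} H^H  (unit symbol energy assumed)
W = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(n_users), H.conj().T)
x_hat = W @ y

print(np.sign(x_hat.real))  # hard decisions ...
print(x)                    # ... versus transmitted symbols
```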

We consider the classical online bipartite matching problem in the probe-commit model. In this problem, when an online vertex arrives, its edges must be probed to determine if they exist, based on known edge probabilities. A probing algorithm must respect commitment, meaning that if a probed edge exists, it must be used in the matching. Additionally, each online vertex has a patience constraint which limits the number of probes that can be made to its adjacent edges. We introduce a new configuration linear program (LP) which we prove is a relaxation of an optimal offline probing algorithm. Using this LP, we establish the following competitive ratios, which depend on the model used to generate the instance graph and the arrival order of its online vertices:

- In the worst-case instance model, an optimal $1/e$ ratio when the vertices arrive in uniformly-at-random (u.a.r.) order.
- In the known independently distributed (i.d.) instance model, an optimal $1/2$ ratio when the vertices arrive in adversarial order, and a $1-1/e$ ratio when the vertices arrive in u.a.r. order.

The latter two results improve upon the previous best competitive ratio of $0.46$ due to Brubach et al. (Algorithmica 2020), which held only in the more restricted known i.i.d. (independent and identically distributed) instance model. Our $1-1/e$-competitive algorithm matches the best known result for the prophet secretary matching problem due to Ehsani et al. (SODA 2018). Our algorithm is efficient and implies a $1-1/e$ approximation ratio for the special case when the graph is known. This is the offline stochastic matching problem, and we improve upon the $0.42$ approximation ratio for one-sided patience due to Pollner et al. (EC 2022), while also generalizing the $1-1/e$ approximation ratio for unbounded patience due to Gamlath et al. (SODA 2019).
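To illustrate the probe-commit rules (probing, commitment, patience), here is a toy greedy simulation. It is not the paper's LP-based algorithm; the instance, the probe order, and the patience values below are all hypothetical.

```python
import random

def probe_commit(online_edges, patience, rng):
    # online_edges: one dict per online vertex mapping offline vertex -> edge probability.
    matched_offline, matching = set(), []
    for v, edges in enumerate(online_edges):
        budget = patience[v]
        # Greedy probe order: highest edge probability first.
        for u, p in sorted(edges.items(), key=lambda e: -e[1]):
            if u in matched_offline or budget == 0:
                continue
            budget -= 1
            if rng.random() < p:          # probe the edge: does it exist?
                matching.append((u, v))   # commitment: an existing probed edge must be used
                matched_offline.add(u)
                break
    return matching

rng = random.Random(1)
online_edges = [{0: 0.5, 1: 0.9}, {0: 0.8}, {0: 0.4, 1: 0.3}]
print(probe_commit(online_edges, patience=[1, 2, 2], rng=rng))
```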

While there is a rich literature on robust methodologies for contamination in continuously distributed data, contamination in categorical data is largely overlooked. This is regrettable because many datasets are categorical and oftentimes suffer from contamination. Examples include inattentive responding and bot responses in questionnaires or zero-inflated count data. We propose a novel class of contamination-robust estimators of models for categorical data, coined $C$-estimators (``$C$'' for categorical). We show that the countable and possibly finite sample space of categorical data results in non-standard theoretical properties. Notably, in contrast to classic robustness theory, $C$-estimators can be simultaneously robust \textit{and} fully efficient at the postulated model. In addition, a certain particularly robust specification fails to be asymptotically Gaussian at the postulated model, but is asymptotically Gaussian in the presence of contamination. We furthermore propose a diagnostic test to identify categorical outliers and demonstrate the enhanced robustness of $C$-estimators in a simulation study.
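To illustrate the kind of contamination at issue, the snippet below shows zero-inflated counts biasing a classical estimator (the Poisson MLE, i.e. the sample mean). This only demonstrates the problem; it is not the paper's $C$-estimator, and the rate and contamination fraction are arbitrary.

```python
import numpy as np

# Zero-inflation demo: 10% of responses are replaced by zeros (e.g. inattentive
# or bot responses), dragging the Poisson MLE below the true rate.
rng = np.random.default_rng(0)
n, lam, eps = 10_000, 4.0, 0.1

clean = rng.poisson(lam, size=n)
contaminated = clean.copy()
contaminated[rng.random(n) < eps] = 0   # contaminating zeros

print(clean.mean())         # ~ 4.0 : MLE close to the true rate
print(contaminated.mean())  # ~ 3.6 : MLE biased toward zero, roughly (1-eps)*lam
```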

Machine unlearning, the process of selectively removing data from trained models, is increasingly crucial for addressing privacy concerns and knowledge gaps post-deployment. Despite this importance, existing approaches are often heuristic and lack formal guarantees. In this paper, we analyze the fundamental utility, time, and space complexity trade-offs of approximate unlearning, providing rigorous certification analogous to differential privacy. For in-distribution forget data -- data similar to the retain set -- we show that a surprisingly simple and general procedure, empirical risk minimization with output perturbation, achieves tight unlearning-utility-complexity trade-offs, addressing a previous theoretical gap on the separation from unlearning "for free" via differential privacy, which inherently facilitates the removal of such data. However, such techniques fail with out-of-distribution forget data -- data significantly different from the retain set -- where unlearning time complexity can exceed that of retraining, even for a single sample. To address this, we propose a new robust and noisy gradient descent variant that provably amortizes unlearning time complexity without compromising utility.
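As a rough sketch of the generic procedure named above, one common instantiation of ERM with output perturbation is to fit a regularized ERM solution and release it with calibrated Gaussian parameter noise. Everything below (the ridge loss, the fixed noise scale, the retain/forget split) is an illustrative assumption, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def erm_output_perturbation(X, y, lam=1.0, noise_scale=0.05):
    # Ridge ERM: w = (X^T X + lam I)^{-1} X^T y
    d = X.shape[1]
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    # Output perturbation: in a real certification, noise_scale would be
    # calibrated from the ERM sensitivity and the target guarantee;
    # it is fixed here purely for illustration.
    return w + noise_scale * rng.standard_normal(d)

X = rng.standard_normal((200, 5))
w_true = np.arange(1.0, 6.0)
y = X @ w_true + 0.1 * rng.standard_normal(200)

retain = slice(0, 150)  # pretend the remaining rows are the forget set
print(erm_output_perturbation(X[retain], y[retain]))  # ~ w_true, plus noise
```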

We introduce a unified, flexible, and easy-to-implement framework of sufficient dimension reduction that can accommodate both linear and nonlinear dimension reduction, and both the conditional distribution and the conditional mean as the targets of estimation. This unified framework is achieved by a specially structured neural network -- the Belted and Ensembled Neural Network (BENN) -- that consists of a narrow latent layer, which we call the belt, and a family of transformations of the response, which we call the ensemble. By strategically placing the belt at different layers of the neural network, we can achieve linear or nonlinear sufficient dimension reduction, and by choosing the appropriate transformation families, we can achieve dimension reduction for the conditional distribution or the conditional mean. Moreover, thanks to the neural-network formulation, the method is very fast to compute, overcoming a computational bottleneck of traditional sufficient dimension reduction estimators, which involve the inversion of a matrix of dimension either $p$ or $n$. We develop the algorithm and derive the convergence rate of our method, compare it with existing sufficient dimension reduction methods, and apply it to two data examples.
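A hypothetical PyTorch sketch of the "belt" idea (the architecture details below are guesses for illustration, not the paper's BENN specification): a narrow latent layer of width $d \ll p$ enforces a $d$-dimensional reduction, and whether the map into the belt is linear or deep controls whether the reduction is linear or nonlinear.

```python
import torch
import torch.nn as nn

class BeltedNet(nn.Module):
    def __init__(self, p, d, hidden=64, out_dim=1, linear_belt=True):
        super().__init__()
        # Belt reached by a single linear map -> linear dimension reduction;
        # a deep encoder before the belt -> nonlinear dimension reduction.
        self.encoder = nn.Linear(p, d) if linear_belt else nn.Sequential(
            nn.Linear(p, hidden), nn.ReLU(), nn.Linear(hidden, d))
        self.head = nn.Sequential(            # maps belt features to the
            nn.Linear(d, hidden), nn.ReLU(),  # (possibly ensembled) targets
            nn.Linear(hidden, out_dim))

    def forward(self, x):
        belt = self.encoder(x)  # the d-dimensional reduction
        return self.head(belt)

x = torch.randn(32, 10)
print(BeltedNet(p=10, d=2)(x).shape)  # torch.Size([32, 1])
```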

The success of AI models relies on the availability of large, diverse, and high-quality datasets, which can be challenging to obtain due to data scarcity, privacy concerns, and high costs. Synthetic data has emerged as a promising solution by generating artificial data that mimics real-world patterns. This paper provides an overview of synthetic data research, discussing its applications, challenges, and future directions. We present empirical evidence from prior art to demonstrate its effectiveness and highlight the importance of ensuring its factuality, fidelity, and unbiasedness. We emphasize the need for responsible use of synthetic data to build more powerful, inclusive, and trustworthy language models.

Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch leads to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach on the recent state-of-the-art Faster R-CNN model, and design two domain adaptation components, on the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory, and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach using multiple datasets including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our proposed approach for robust object detection in various domain shift scenarios.
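A common way to realize such an adversarial domain classifier is a gradient reversal layer (GRL). The sketch below is a generic illustration of that mechanism, not the paper's exact components; the feature sizes and the stand-in backbone are placeholders.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)      # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_out):
        # Sign-flipped (scaled) gradient on the backward pass: the feature
        # extractor is trained to *fool* the domain classifier.
        return -ctx.lam * grad_out, None

features = nn.Sequential(nn.Linear(256, 128), nn.ReLU())  # stand-in backbone
domain_clf = nn.Linear(128, 2)                            # source vs. target

x = torch.randn(8, 256)
domain_labels = torch.randint(0, 2, (8,))
logits = domain_clf(GradReverse.apply(features(x), 1.0))
loss = nn.functional.cross_entropy(logits, domain_labels)
loss.backward()  # reversed gradients flow into `features`, normal into `domain_clf`
```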
