
A Stackelberg duopoly model in which two firms compete to maximize their market share is considered. The firms offer a service/product to customers who are spread over several geographical regions (e.g., countries, provinces, or states). Each region has its own propagation characteristics (spreading and recovery rates) for each service. We consider that the spreading rate can be controlled by each firm and depends on the investment the firm makes in each region. One of the main objectives of this work is to characterize the advertising budget allocation strategy that each firm should use across regions to maximize its market share under competition. To achieve this goal we propose a Stackelberg game model that is relatively simple while capturing the main effects of the competition for market share. By characterizing the strong/weak Stackelberg equilibria of the game, we provide the associated budget allocation strategy. In this setting, we establish under which conditions the solution of the game is the so-called ``winner takes all''. Numerical results expand upon our theoretical findings, and we provide the equilibrium characterization for an example.
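For readers unfamiliar with the distinction invoked above, the following are the standard formulations of strong (optimistic) and weak (pessimistic) Stackelberg equilibrium values; the payoff functions $J_1, J_2$ and strategy sets $X, Y$ are generic placeholders, not the specific market-share model of the paper.

```latex
% Leader picks x \in X, follower best-responds with y \in Y:
\mathrm{BR}(x) \;=\; \arg\max_{y \in Y} J_2(x,y),
\qquad
V_{\mathrm{strong}} \;=\; \max_{x \in X}\;\max_{y \in \mathrm{BR}(x)} J_1(x,y),
\qquad
V_{\mathrm{weak}} \;=\; \max_{x \in X}\;\min_{y \in \mathrm{BR}(x)} J_1(x,y).
```

The two values differ only in how ties in the follower's best-response set are broken: in the leader's favor (strong) or against the leader (weak).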

Related Content

A simple, recently observed generalization of the classical Singleton bound to list-decoding asserts that rate $R$ codes are not list-decodable using list-size $L$ beyond an error fraction $\tfrac{L}{L+1} (1-R)$ (the Singleton bound being the case of $L=1$, i.e., unique decoding). We prove that in order to approach this bound for any fixed $L >1$, one needs exponential alphabets. Specifically, for every $L>1$ and $R\in(0,1)$, if a rate $R$ code can be list-of-$L$ decoded up to error fraction $\tfrac{L}{L+1} (1-R -\varepsilon)$, then its alphabet must have size at least $\exp(\Omega_{L,R}(1/\varepsilon))$. This is in sharp contrast to the situation for unique decoding where certain families of rate $R$ algebraic-geometry (AG) codes over an alphabet of size $O(1/\varepsilon^2)$ are unique-decodable up to error fraction $(1-R-\varepsilon)/2$. Our lower bound is tight up to constant factors in the exponent -- with high probability random codes (or, as shown recently, even random linear codes) over $\exp(O_L(1/\varepsilon))$-sized alphabets, can be list-of-$L$ decoded up to error fraction $\tfrac{L}{L+1} (1-R -\varepsilon)$.
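As a worked numerical instance of the stated bounds (my own illustration, not taken from the paper), take list size $L=2$ and rate $R=1/2$:

```latex
% Generalized Singleton bound on the decodable error fraction:
\frac{L}{L+1}(1-R) \;=\; \frac{2}{3}\cdot\frac{1}{2} \;=\; \frac{1}{3},
\qquad
% approaching it up to slack \varepsilon, i.e. error fraction
\frac{L}{L+1}(1-R-\varepsilon) \;=\; \frac{2}{3}\Bigl(\tfrac{1}{2}-\varepsilon\Bigr),
% forces, by the theorem, an alphabet of size
\qquad |\Sigma| \;\ge\; \exp\bigl(\Omega_{2,1/2}(1/\varepsilon)\bigr).
```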

Counterfactual prediction methods are required when a model will be deployed in a setting where treatment policies differ from the setting where the model was developed, or when the prediction question is explicitly counterfactual. However, estimating and evaluating counterfactual prediction models is challenging because one does not observe the full set of potential outcomes for all individuals. Here, we discuss how to tailor a model to a counterfactual estimand, how to assess the model's performance, and how to perform model and tuning parameter selection. We also provide identifiability results for measures of performance for a potentially misspecified counterfactual prediction model based on training and test data from the same (factual) source population. Last, we illustrate the methods using simulation and apply them to the task of developing a statin-na\"{i}ve risk prediction model for cardiovascular disease.
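The abstract does not spell out a specific estimator, but a common ingredient when evaluating a model under a hypothetical "never treated" regime is inverse-probability weighting of the untreated subjects. The sketch below is a minimal, generic illustration of that idea only; the function names, the logistic propensity model, and the Brier-score metric are my own choices, not necessarily the authors' estimators, and the usual no-unmeasured-confounding and positivity assumptions are taken for granted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_counterfactual_brier(X, treated, outcome, risk_model):
    """Estimate the Brier score of `risk_model` under the counterfactual
    'never treated' regime by reweighting untreated subjects with the
    inverse probability of remaining untreated given covariates X.
    A minimal sketch, not the paper's estimator."""
    # Propensity of treatment given covariates (illustrative logistic model).
    propensity = LogisticRegression(max_iter=1000).fit(X, treated)
    p_untreated = propensity.predict_proba(X)[:, 0]        # P(A = 0 | X)

    mask = treated == 0                                     # keep untreated subjects
    w = 1.0 / p_untreated[mask]                             # IPW weights
    pred = risk_model.predict_proba(X[mask])[:, 1]          # predicted risk

    # Weighted Brier score under the counterfactual (untreated) regime.
    return np.average((pred - outcome[mask]) ** 2, weights=w)
```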

In the envy-free perfect matching problem, $n$ items with unit supply are available to be sold to $n$ buyers with unit demand. The objective is to find an allocation and prices such that both the seller's revenue and the buyers' surpluses are maximized -- given the buyers' valuations for the items -- and all items must be sold. Previous work has shown that this problem can be solved in cubic time, using maximum weight perfect matchings to find optimal envy-free allocations and shortest paths to find optimal envy-free prices. In this work, I consider that buyers have fixed budgets, items have quality measures, and valuations are defined by multiplying these two quantities. Under this approach, I prove that the valuation matrix has the inverse Monge property, thus simplifying the search for optimal envy-free allocations and, consequently, for optimal envy-free prices through a strategy based on dynamic programming. As a result, I propose an algorithm that finds optimal solutions in quadratic time.
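To make the structural claim concrete, here is a minimal sketch of why valuations of the form (budget × quality) are easy to handle: by the rearrangement inequality the maximum-weight perfect matching pairs buyers and items sorted in the same order, and revenue-maximizing envy-free prices can then be propagated along that order after sorting. This illustrates the general idea under those assumptions and is not the paper's exact algorithm.

```python
def envy_free_budget_quality(budgets, qualities):
    """Buyers with fixed budgets, items with quality scores, valuation
    v[i][j] = budgets[i] * qualities[j] (an inverse Monge matrix).
    Returns (assignment, prices): assignment maps buyer -> item, prices are
    revenue-maximizing envy-free prices.  A sketch, not the paper's algorithm."""
    n = len(budgets)
    buyer_order = sorted(range(n), key=lambda i: budgets[i])    # ascending budgets
    item_order = sorted(range(n), key=lambda j: qualities[j])   # ascending qualities

    # Rearrangement inequality: match k-th smallest budget with k-th smallest quality.
    assignment = {buyer_order[k]: item_order[k] for k in range(n)}

    prices = [0.0] * n
    prev_item = None
    for k in range(n):
        i, j = buyer_order[k], item_order[k]
        cap = budgets[i] * qualities[j]                         # individual rationality
        if prev_item is None:
            prices[j] = cap
        else:
            # Envy-freeness toward the adjacent cheaper item is the binding constraint.
            step = budgets[i] * (qualities[j] - qualities[prev_item])
            prices[j] = min(cap, prices[prev_item] + step)
        prev_item = j
    return assignment, prices

# Example: three buyers, three items; the largest budget gets the best item.
print(envy_free_budget_quality([1.0, 3.0, 2.0], [0.5, 2.0, 1.0]))
```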

We present a method for finding envy-free prices in a combinatorial auction where the number of consumers $n$ coincides with the number of distinct items for sale, each consumer can buy a single item, and each item has only one unit available. This is a particular case of the {\it unit-demand envy-free pricing problem}, and was recently revisited by Arbib et al. (2019). These authors proved that, using a Fibonacci heap to solve the maximum weight perfect matching and the Bellman-Ford algorithm to compute the envy-free prices, the overall time complexity for solving the problem is $O(n^3)$. We propose a method based on a dynamic programming design strategy that seeks the optimal envy-free prices by increasing the consumers' utilities. It has the same cubic time complexity as the aforementioned approach, but theoretical and empirical results indicate that it runs faster than the shortest-path strategy, reducing the time needed to determine optimal envy-free prices by approximately 48\% on average.
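For context, the shortest-path baseline referenced above can be summarized as follows: once a maximum-weight perfect matching $\sigma$ is fixed, every envy-freeness constraint $p_{\sigma(i)} - p_k \le v_{i,\sigma(i)} - v_{i,k}$ is a difference constraint, so revenue-maximizing envy-free prices are shortest-path distances from an artificial source in the corresponding constraint graph. The sketch below is a plain Bellman-Ford illustration of that baseline idea (the matching is taken as given); it is not the authors' new dynamic-programming method.

```python
import math

def envy_free_prices_bellman_ford(v, sigma):
    """v[i][k]: buyer i's valuation for item k; sigma[i]: item assigned to
    buyer i by a maximum-weight perfect matching.  Returns revenue-maximizing
    envy-free prices as shortest-path distances (sketch of the baseline)."""
    n = len(v)
    # Difference-constraint graph:
    #   source -> sigma[i], weight v[i][sigma[i]]            (p_{sigma(i)} <= v_{i,sigma(i)})
    #   k -> sigma[i],      weight v[i][sigma[i]] - v[i][k]   (buyer i does not envy item k)
    edges = [("s", sigma[i], v[i][sigma[i]]) for i in range(n)]
    edges += [(k, sigma[i], v[i][sigma[i]] - v[i][k])
              for i in range(n) for k in range(n) if k != sigma[i]]

    dist = {k: math.inf for k in range(n)}
    dist["s"] = 0.0
    for _ in range(n):                       # Bellman-Ford relaxation rounds
        for a, b, w in edges:
            if dist[a] + w < dist[b]:
                dist[b] = dist[a] + w
    return [dist[k] for k in range(n)]       # O(n^3) overall: n rounds over O(n^2) edges
```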

Software developers typically rely upon a large network of dependencies to build their applications. For instance, the NPM package repository contains over 3 million packages and serves tens of billions of downloads weekly. Understanding the structure and nature of packages, dependencies, and published code requires datasets that provide researchers with easy access to the metadata and code of packages. However, prior work on NPM dataset construction typically has two limitations: 1) only metadata is scraped, and 2) packages or versions that are deleted from NPM cannot be scraped. Over 330,000 versions of packages were deleted from NPM between July 2022 and May 2023. This data is critical for researchers as it often pertains to important questions of security and malware. We present npm-follower, a dataset and crawling architecture that archives the metadata and code of all packages and versions as they are published, and is thus able to retain data that is later deleted. The dataset currently includes over 35 million versions of packages and grows at a rate of about 1 million versions per month. The dataset is designed to be easily used by researchers answering questions involving either metadata or program analysis. Both the code and dataset are available at //dependencies.science.
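The crawler architecture itself is documented at the linked site. Purely to illustrate the kind of per-package information being archived, the sketch below fetches a package's public metadata document from the npm registry and lists the tarball URL of each published version; this uses the standard public registry endpoint and is not the npm-follower crawler, which additionally archives the code itself, runs continuously, and retains deleted versions.

```python
import json
import urllib.request

def list_version_tarballs(package_name):
    """Fetch the public registry metadata ("packument") for one package and
    return {version: tarball_url}.  Illustrative only."""
    url = f"https://registry.npmjs.org/{package_name}"
    with urllib.request.urlopen(url) as resp:
        packument = json.load(resp)
    return {ver: meta["dist"]["tarball"]
            for ver, meta in packument.get("versions", {}).items()}

if __name__ == "__main__":
    tarballs = list_version_tarballs("left-pad")
    print(f"{len(tarballs)} published versions")
```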

Post-stratification is often used to estimate treatment effects with higher efficiency. However, the majority of existing post-stratification frameworks depend on prior knowledge of the distributions of covariates and assume that the units are classified into post-strata without error. We propose a novel method to determine a proper stratification rule by mapping the covariates into a post-stratification factor (PSF) using predictive regression models. Inspired by the bootstrap aggregating (bagging) method, we utilize the out-of-bag delete-D jackknife to estimate strata boundaries, strata weights, and the variance of the point estimate. Confidence intervals are constructed with these estimators to take into account the additional variability coming from uncertainty in the strata boundaries and weights. Extensive simulations show that our proposed method consistently improves the efficiency of the estimates when the regression models are predictive and tends to be more robust than the regression imputation method.
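As a minimal, generic illustration of post-stratifying on a predicted factor (not the paper's out-of-bag delete-D jackknife procedure, which additionally estimates boundary and weight uncertainty), the sketch below fits a prognostic model on the control units, bins its predictions into strata, and forms a weighted difference-in-means treatment-effect estimate. The quantile boundaries and the random-forest model are illustrative choices of mine.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def post_stratified_ate(X, treat, y, n_strata=5):
    """Post-stratify on a predicted prognostic score and return a weighted
    difference-in-means ATE estimate.  A sketch only."""
    ctrl = treat == 0
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[ctrl], y[ctrl])
    psf = model.predict(X)   # in practice use out-of-bag / held-out predictions, as the paper stresses

    edges = np.quantile(psf, np.linspace(0, 1, n_strata + 1))
    strata = np.digitize(psf, edges[1:-1])          # stratum labels 0 .. n_strata-1

    ate, n = 0.0, len(y)
    for s in range(n_strata):
        in_s = strata == s
        t, c = in_s & (treat == 1), in_s & (treat == 0)
        if t.any() and c.any():                     # skip empty cells in this simple sketch
            ate += (in_s.sum() / n) * (y[t].mean() - y[c].mean())
    return ate
```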

With the rapid development of facial forgery techniques, forgery detection has attracted increasing attention due to security concerns. Existing approaches attempt to use frequency information to mine subtle artifacts in high-quality forged faces. However, their exploitation of frequency information is coarse-grained and, more importantly, their vanilla learning process struggles to extract fine-grained forgery traces. To address this issue, we propose a progressive enhancement learning framework that exploits both RGB and fine-grained frequency clues. Specifically, we perform a fine-grained decomposition of RGB images to completely decouple the real and fake traces in the frequency space. Subsequently, we propose a progressive enhancement learning framework based on a two-branch network, combined with self-enhancement and mutual-enhancement modules. The self-enhancement module captures the traces in different input spaces based on spatial noise enhancement and channel attention. The mutual-enhancement module concurrently enhances RGB and frequency features by communicating in the shared spatial dimension. The progressive enhancement process facilitates the learning of discriminative features with fine-grained face forgery clues. Extensive experiments on several datasets show that our method outperforms state-of-the-art face forgery detection methods.
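The actual decomposition and module designs are given in the paper; purely to fix ideas, the toy PyTorch sketch below shows the overall two-branch pattern with a cross-branch exchange implemented as mutual spatial gating. Layer sizes, the gating form, and the fusion head are invented placeholders, not the authors' self-/mutual-enhancement modules.

```python
import torch
import torch.nn as nn

class TwoBranchSketch(nn.Module):
    """Toy two-branch (RGB / frequency) classifier with a cross-branch exchange.
    A placeholder sketch only, not the paper's architecture."""
    def __init__(self, ch=32):
        super().__init__()
        self.rgb_stem = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.freq_stem = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        # one spatial gate per branch, used to modulate the other branch
        self.rgb_gate = nn.Conv2d(ch, 1, 1)
        self.freq_gate = nn.Conv2d(ch, 1, 1)
        self.head = nn.Linear(2 * ch, 2)            # real / fake logits

    def forward(self, rgb, freq):
        r, f = self.rgb_stem(rgb), self.freq_stem(freq)
        # cross-branch "enhancement": each branch is reweighted by the other's spatial map
        r = r * torch.sigmoid(self.freq_gate(f))
        f = f * torch.sigmoid(self.rgb_gate(r))
        feat = torch.cat([r.mean(dim=(2, 3)), f.mean(dim=(2, 3))], dim=1)
        return self.head(feat)
```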

Deep learning techniques have received much attention in the area of image denoising. However, there are substantial differences among the various types of deep learning methods for image denoising. Specifically, discriminative learning based on deep learning can ably address the issue of Gaussian noise, while optimization models based on deep learning are effective in estimating the real noise. However, there has thus far been little research summarizing the different deep learning techniques for image denoising. In this paper, we offer a comparative study of deep learning techniques in image denoising. We first classify deep convolutional neural networks (CNNs) into four categories: CNNs for additive white noisy images, CNNs for real noisy images, CNNs for blind denoising, and CNNs for hybrid noisy images (the combination of noisy, blurred, and low-resolution images). Then, we analyze the motivations and principles of the different types of deep learning methods. Next, we compare the state-of-the-art methods on public denoising datasets in terms of quantitative and qualitative analysis. Finally, we point out some potential challenges and directions for future research.
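To ground the "discriminative learning for Gaussian noise" category mentioned above, the sketch below is a minimal residual-learning denoiser in the spirit of DnCNN: the network predicts the noise map and subtracts it from the input. The depth and widths are illustrative and not tied to any specific surveyed model.

```python
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    """Minimal DnCNN-style denoiser: predict the noise, subtract it.
    A sketch for illustration; depth/width are arbitrary choices."""
    def __init__(self, channels=1, features=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.noise_net = nn.Sequential(*layers)

    def forward(self, noisy):
        # residual learning: estimated clean image = noisy input - predicted noise
        return noisy - self.noise_net(noisy)
```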

Although measuring held-out accuracy has been the primary approach to evaluate generalization, it often overestimates the performance of NLP models, while alternative approaches for evaluating models either focus on individual tasks or on specific behaviors. Inspired by principles of behavioral testing in software engineering, we introduce CheckList, a task-agnostic methodology for testing NLP models. CheckList includes a matrix of general linguistic capabilities and test types that facilitate comprehensive test ideation, as well as a software tool to generate a large and diverse number of test cases quickly. We illustrate the utility of CheckList with tests for three tasks, identifying critical failures in both commercial and state-of-the-art models. In a user study, a team responsible for a commercial sentiment analysis model found new and actionable bugs in an extensively tested model. In another user study, NLP practitioners with CheckList created twice as many tests, and found almost three times as many bugs, as users without it.
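As an illustration of the kind of test CheckList systematizes, consider checking that a sentiment model handles simple negation. The snippet below is plain Python mimicking a "minimum functionality test" with templated examples; it is not the CheckList tool's actual API, and the `predict` callable is a hypothetical model interface.

```python
def negation_mft(predict, threshold=0.5):
    """Minimum-functionality-style test: negated positive phrases should not be
    predicted as positive.  `predict` maps a list of texts to positive-class
    probabilities.  A plain-Python illustration, not the CheckList API."""
    template = "The {thing} was not {adj}."
    things = ["food", "service", "flight", "movie"]
    adjs = ["good", "great", "amazing", "enjoyable"]
    cases = [template.format(thing=t, adj=a) for t in things for a in adjs]

    failures = [(text, p) for text, p in zip(cases, predict(cases)) if p >= threshold]
    print(f"negation MFT: {len(failures)}/{len(cases)} failures")
    return failures

# Usage (hypothetical model): failures = negation_mft(my_model.positive_probability)
```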

Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch leads to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, on the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach using multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our approach for robust object detection in various domain shift scenarios.
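Adversarial domain classifiers of this kind are typically trained through a gradient reversal layer (as in DANN-style adaptation); the sketch below shows that general mechanism in PyTorch. It illustrates the technique only, not the paper's exact image-/instance-level heads or its consistency regularizer.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in the
    backward pass, so the feature extractor learns to fool the domain classifier."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainClassifier(nn.Module):
    """Small domain head applied to (gradient-reversed) features; a sketch only."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, features, lambd=1.0):
        return self.net(GradReverse.apply(features, lambd))
```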
