
Using order-level data from Uber Technologies, we study how the COVID-19 pandemic and the ensuing shutdown of businesses in the United States in 2020 affected small business restaurant supply and demand on the Uber Eats platform. We find evidence that small restaurants experienced significant increases in activity on the platform following the closure of the dine-in channel. We document how locality- and restaurant-specific characteristics moderate the size of the increase in activity through the digital channel and explain how these increases may be due to both demand- and supply-side shocks. We observe an increase in the intensity of competitive effects following the economic shock and show that growth in the number of providers on a platform induces both market expansion and heightened inter-provider competition. Higher platform activity in response to the shock has not only short-run implications: restaurants with larger demand shocks had a higher on-platform survival rate one year after the lockdown, suggesting that the platform channel contributes to long-run resilience following a crisis. Our findings document the heterogeneous effects of platforms during the pandemic, underscore the critical role that digital technologies play in enabling business resilience in the economy, and provide insight into how platforms can manage competing incentives when balancing market expansion and growth goals with the competitive interests of their incumbent providers.

Related Content

How can citizens moderate hate, toxicity, and extremism in online discourse? We analyze a large corpus of more than 130,000 discussions on German Twitter over the turbulent four years marked by the migrant crisis and political upheavals. With the help of human annotators, language models, machine learning classifiers, and longitudinal statistical analyses, we discern the dynamics of different dimensions of discourse. We find that expressing simple opinions, not necessarily supported by facts but also without insults, relates to the least hate, toxicity, and extremity of speech and speakers in subsequent discussions. Sarcasm also helps in achieving those outcomes, in particular in the presence of organized extreme groups. More constructive comments, such as providing facts or exposing contradictions, can backfire and attract more extremity. Mentioning either outgroups or ingroups is typically related to a deterioration of discourse in the long run. A pronounced emotional tone, whether negative (anger, fear) or positive (enthusiasm, pride), also leads to worse outcomes. Going beyond one-shot analyses on smaller samples of discourse, our findings have implications for the successful management of online commons through collective civic moderation.
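A minimal sketch of the longitudinal step described above, under stated assumptions: suppose each discussion is summarized by feature shares (e.g. the fraction of simple-opinion comments) produced by upstream classifiers, and by a toxicity score. The function name `lagged_effects` and the plain lagged-OLS specification are illustrative stand-ins, not the paper's actual statistical model:

```python
import numpy as np

def lagged_effects(features, toxicity):
    """Regress the toxicity of discussion t+1 on the discourse-feature
    shares of discussion t (ordinary least squares with an intercept).

    features : (n, p) array of per-discussion feature shares
    toxicity : (n,) array of per-discussion toxicity scores
    Returns the coefficient vector: intercept, then one slope per feature.
    """
    X = np.column_stack([np.ones(len(features) - 1), features[:-1]])
    y = toxicity[1:]  # outcomes are shifted one discussion ahead
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

A negative slope on the simple-opinion share would correspond to the paper's finding that such comments relate to less subsequent toxicity.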

Millions of Ethereum smart contracts are created each year, attracting financially motivated attackers. However, existing analyzers do not meet the need to precisely analyze the financial security of large numbers of contracts. In this paper, we propose and implement FASVERIF, an automated analyzer for fine-grained analysis of smart contracts' financial security. On the one hand, FASVERIF automatically generates models to be verified against security properties of smart contracts. On the other hand, our analyzer automatically generates the security properties themselves, which distinguishes it from existing formal verifiers for smart contracts. As a result, FASVERIF can automatically process the source code of smart contracts and uses formal methods wherever possible to maximize its accuracy. We evaluate FASVERIF on a vulnerability dataset by comparing it with other automatic tools. Our evaluation shows that FASVERIF greatly outperforms representative tools based on different technologies, with respect to both accuracy and coverage of vulnerability types.
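To make the notion of a "financial security property" concrete, here is a toy, hand-written invariant check of the kind such an analyzer might generate and verify: token transfers must conserve total supply and never overdraw a balance. This is only an illustrative runtime check over a transaction trace; FASVERIF itself generates properties automatically and verifies them with formal methods, not by replaying traces:

```python
def conserves_supply(trace, initial_balances):
    """Check a financial invariant over a transfer trace.

    trace            : list of (sender, receiver, amount) tuples
    initial_balances : dict mapping account -> starting balance
    Returns True iff no transfer overdraws a balance and the total
    supply is unchanged at the end of the trace.
    """
    balances = dict(initial_balances)
    total = sum(balances.values())
    for sender, receiver, amount in trace:
        if balances.get(sender, 0) < amount:
            return False  # overdraft: invariant violated
        balances[sender] -= amount
        balances[receiver] = balances.get(receiver, 0) + amount
    return sum(balances.values()) == total
```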

This study surveys digital library initiatives in India, collecting secondary information on about fifty digital libraries from their respective websites. The findings show that in most cases the actual conception of the digital library is still at a nascent stage: online subscriptions and links to third-party websites are also presented as digital libraries. Many digital libraries do not have a proper search interface on their websites due to improper arrangement of metadata. In some cases, they do not have their own digitized collections and instead provide other collections or refer their users to third-party websites. Moreover, many digital libraries cannot be accessed from outside the organization (no remote access). Hence, regular website maintenance, remote access facilities, and proper training of information professionals are required. Furthermore, the so-called digital libraries in India have neither developed their own standards nor followed any global standards. However, the usage statistics of government digital libraries are far better than those of academic or public libraries; users appear more interested in government rules, laws, orders, and the like, which is perhaps a positive sign of digital governance reaching the public. The study offers several important observations and policy suggestions that may be helpful for students, scholars, library professionals, and decision-makers in government.

This paper considers estimating functional-coefficient models in panel quantile regression with individual effects, allowing for cross-sectional and temporal dependence in large panels. A latent group structure is imposed on the heterogeneous quantile regression models so that the number of nonparametric functional coefficients to be estimated can be reduced considerably. With the preliminary local linear quantile estimates of the subject-specific functional coefficients, a classic agglomerative clustering algorithm is used to estimate the unknown group structure, and an easy-to-implement ratio criterion is proposed to determine the group number. The estimated group number and structure are shown to be consistent. Furthermore, a post-grouping local linear smoothing method is introduced to estimate the group-specific functional coefficients, and the relevant asymptotic normal distribution theory is derived with a normalisation rate comparable to that in the literature. The developed methodologies and theory are verified through a simulation study and showcased with an application to house price data from UK local authority districts, which reveals different homogeneity structures at different quantile levels.
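The grouping step can be sketched as follows, under stated assumptions: suppose the preliminary subject-specific coefficient estimates have been summarized as one vector per subject, agglomerative (Ward) clustering is applied to those vectors, and the group number is chosen by a ratio criterion on successive merge heights (a large jump suggests two genuinely different groups were merged). The exact clustering distance and ratio criterion in the paper may differ:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def estimate_groups(coef, max_groups=5):
    """Cluster subject-specific coefficient estimates and pick the
    number of groups via a ratio criterion on merge heights.

    coef : (n_subjects, p) array of preliminary coefficient estimates
    Returns (group_number, per-subject group labels).
    """
    Z = linkage(coef, method="ward")
    heights = Z[:, 2]                 # increasing merge heights
    tail = heights[-max_groups:]      # last few merges only
    ratios = tail[1:] / np.maximum(tail[:-1], 1e-12)
    # The merge with the largest height jump joins distinct groups;
    # cutting just before it leaves (max_groups - argmax) clusters.
    k = max_groups - int(np.argmax(ratios))
    labels = fcluster(Z, t=k, criterion="maxclust")
    return k, labels
```

By construction this criterion selects between 2 and `max_groups` groups; post-grouping estimation would then pool subjects within each recovered group.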

Web 3.0 pursues the establishment of decentralized ecosystems based on blockchain technologies to drive the digital transformation of physical commerce and governance. Through consensus algorithms and smart contracts in blockchain, which are based on cryptographic technologies, digital identity, digital asset management, decentralized autonomous organizations, and decentralized finance are realized for secure and transparent digital economy services in Web 3.0, promoting the integration of the digital and physical economies. With the rapid realization of quantum devices, Web 3.0 is being developed in parallel with the deployment of quantum cloud computing and the quantum Internet. In this regard, quantum computing threatens the cryptographic systems that currently protect data security, while also reshaping modern cryptography through the advantages of quantum computing and communication. Therefore, this survey provides a comprehensive overview of blockchain-based Web 3.0 and its quantum and post-quantum enhancement from two complementary perspectives. On the one hand, post-quantum migration methods and quantum-resistant signatures offer potential ways to achieve unforgeable security under quantum attack for the internal technologies of blockchain. On the other hand, quantum and post-quantum encryption and verification algorithms improve the external performance of the blockchain, enabling a decentralized, valuable, secure blockchain system. Finally, we discuss future directions toward developing a provably secure decentralized digital ecosystem.

The ability to generate synthetic sequences is crucial for a wide range of applications, and recent advances in deep learning architectures and generative frameworks have greatly facilitated this process. In particular, unconditional one-shot generative models constitute an attractive line of research that focuses on capturing the internal information of a single image or video to generate samples with similar content. Since many of those one-shot models are shifting toward efficient non-deep and non-adversarial approaches, we examine the versatility of a one-shot generative model for augmenting whole datasets. In this work, we focus on how similarity at the subsequence level affects similarity at the sequence level, and derive bounds on the optimal transport of real and generated sequences based on that of corresponding subsequences. We use a one-shot generative model to sample from the vicinity of individual sequences and generate subsequence-similar ones, and demonstrate the improvement of this approach by applying it to the problem of Unmanned Aerial Vehicle (UAV) identification using limited radio-frequency (RF) signals. In the context of UAV identification, RF fingerprinting is an effective method for distinguishing legitimate devices from malicious ones, but heterogeneous environments and channel impairments can impose data scarcity and affect the performance of classification models. By using subsequence similarity to augment sequences of RF data with only a small fraction (5%-20%) of the training dataset, we achieve significant improvements in performance metrics such as accuracy, precision, recall, and F1 score.
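The idea of sampling "from the vicinity" of a single sequence while preserving subsequence-level similarity can be sketched crudely as follows: overlap-add randomly chosen windows of the original sequence and add a small perturbation. This is a deliberately simple, non-deep stand-in for the one-shot generative model used in the paper, not its actual method; the function name and parameters are illustrative:

```python
import numpy as np

def augment_sequence(x, win=16, noise=0.05, rng=None):
    """Generate a new sequence similar to x at the subsequence level:
    stitch together randomly chosen windows of x via overlap-add
    averaging, then add small Gaussian perturbations."""
    rng = np.random.default_rng(rng)
    n = len(x)
    win = min(win, n)
    hop = max(win // 2, 1)
    out = np.zeros(n)
    count = np.zeros(n)
    pos = 0
    while pos < n:
        s = rng.integers(0, n - win + 1)   # random source window in x
        m = min(win, n - pos)
        out[pos:pos + m] += x[s:s + m]
        count[pos:pos + m] += 1            # track overlap for averaging
        pos += hop
    out /= count
    return out + rng.normal(0.0, noise * x.std(), size=n)
```

Each window of the output is (up to blending and noise) a window of the input, so the generated sequence stays close to the original at the subsequence level while differing globally.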

Estimation of unsteady flow fields around flight vehicles may improve flow interactions and lead to enhanced vehicle performance. Although flow-field representations can be very high-dimensional, their dynamics can have low-order representations and may be estimated using a few appropriately placed measurements. This paper presents a sensor-selection framework for the intended application of data-driven flow-field estimation. This framework combines data-driven modeling, steady-state Kalman filter design, and a sparsification technique for sequential selection of sensors. This paper also uses the sensor-selection framework to design sensor arrays that can perform well across a variety of operating conditions. Flow estimation results on numerical data show that the proposed framework produces arrays that are highly effective at flow-field estimation for the flow behind an airfoil at a high angle of attack using embedded pressure sensors. Analysis of the flow fields reveals that the paths of impinging stagnation points along the airfoil's surface during a shedding period of the flow are highly informative locations for placement of pressure sensors.
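A minimal sketch of sequential sensor selection with a steady-state Kalman filter, under stated assumptions: the reduced-order dynamics are linear (`x_{k+1} = A x_k + w`), each candidate sensor contributes one measurement row, and sensors are added greedily to minimize the trace of the steady-state filter error covariance (solved via the discrete algebraic Riccati equation). The paper's actual sparsification technique may differ from plain greedy selection:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def greedy_sensors(A, C_all, Q, r, k):
    """Greedily pick k sensor rows from the candidate matrix C_all
    (one row per candidate location) to minimize the trace of the
    steady-state Kalman-filter error covariance.

    A : state matrix, Q : process-noise covariance,
    r : per-sensor measurement-noise variance (assumed identical).
    Returns (chosen row indices, final trace of the covariance).
    """
    chosen = []
    for _ in range(k):
        best, best_cost = None, np.inf
        for j in range(C_all.shape[0]):
            if j in chosen:
                continue
            C = C_all[chosen + [j]]
            R = r * np.eye(len(chosen) + 1)
            # Steady-state (prior) error covariance of the Kalman filter
            P = solve_discrete_are(A.T, C.T, Q, R)
            cost = np.trace(P)
            if cost < best_cost:
                best, best_cost = j, cost
        chosen.append(best)
    return chosen, best_cost
```

On a diagonal toy system the first sensor picked is the one observing the mode with the largest unobserved steady-state variance, matching the intuition that informative locations are those that most reduce estimation uncertainty.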

In recent years, industry leaders and researchers have proposed to use technical provenance standards to address visual misinformation spread through digitally altered media. By adding immutable and secure provenance information such as authorship and edit date to media metadata, social media users could potentially better assess the validity of the media they encounter. However, it is unclear how end users would respond to provenance information, or how to best design provenance indicators to be understandable to laypeople. We conducted an online experiment with 595 participants from the US and UK to investigate how provenance information altered users' accuracy perceptions and trust in visual content shared on social media. We found that provenance information often lowered trust and caused users to doubt deceptive media, particularly when it revealed that the media was composited. We additionally tested conditions where the provenance information itself was shown to be incomplete or invalid, and found that these states have a significant impact on participants' accuracy perceptions and trust in media, leading them, in some cases, to disbelieve honest media. Our findings show that provenance, although enlightening, is still not a concept well-understood by users, who confuse media credibility with the orthogonal (albeit related) concept of provenance credibility. We discuss how design choices may contribute to provenance (mis)understanding, and conclude with implications for usable provenance systems, including clearer interfaces and user education.

Liou-Steffen splitting (AUSM) schemes are popular for low Mach number simulations; however, like many numerical schemes for compressible flow, they require careful modification to accurately resolve convective features in this regime. Previous analyses of these schemes usually focus only on a single discrete scheme at the convective limit, considering flow with acoustic effects only empirically, if at all. In our recent paper (Hope-Collins & di Mare, 2023) we derived constraints on the artificial diffusion scaling of low Mach number schemes for flows both with and without acoustic effects, and applied this analysis to Roe-type finite-volume schemes. In this paper we form approximate diffusion matrices for the Liou-Steffen splitting, as well as the closely related Zha-Bilgen and Toro-Vasquez splittings. We use the constraints found in Hope-Collins & di Mare (2023) to derive and analyse the required scaling of each splitting at low Mach number. By transforming the diffusion matrices to the entropy variables we can identify erroneous diffusion terms compared to the ideal form used in Hope-Collins & di Mare (2023). These terms vanish asymptotically for the Liou-Steffen splitting, but result in spurious entropy generation for the Zha-Bilgen and Toro-Vasquez splittings unless a particular form of the interface pressure is used. Numerical examples for acoustic and convective flow verify the results of the analysis, and show the importance of considering the resolution of the entropy field when assessing schemes of this type.
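For readers unfamiliar with the scheme under analysis, here is a sketch of the classic first-order Liou-Steffen (AUSM) interface flux for the 1-D Euler equations: the convective flux is carried by a split interface Mach number, and the pressure flux comes from a separate polynomial pressure splitting. This is the baseline scheme only, without the low-Mach modifications the paper analyses:

```python
import numpy as np

def ausm_flux(rhoL, uL, pL, rhoR, uR, pR, gamma=1.4):
    """Liou-Steffen (AUSM) interface flux for the 1-D Euler equations.
    Returns the flux of [mass, momentum, total energy]."""
    aL = np.sqrt(gamma * pL / rhoL)   # speeds of sound
    aR = np.sqrt(gamma * pR / rhoR)
    ML, MR = uL / aL, uR / aR         # cell Mach numbers

    # Polynomial Mach-number splittings (subsonic), upwind (supersonic)
    def M_plus(M):
        return 0.25 * (M + 1.0) ** 2 if abs(M) <= 1 else 0.5 * (M + abs(M))

    def M_minus(M):
        return -0.25 * (M - 1.0) ** 2 if abs(M) <= 1 else 0.5 * (M - abs(M))

    # Pressure splittings
    def p_plus(M, p):
        return 0.25 * p * (M + 1.0) ** 2 * (2.0 - M) if abs(M) <= 1 else p * (M > 0)

    def p_minus(M, p):
        return 0.25 * p * (M - 1.0) ** 2 * (2.0 + M) if abs(M) <= 1 else p * (M < 0)

    Mh = M_plus(ML) + M_minus(MR)            # interface Mach number
    ph = p_plus(ML, pL) + p_minus(MR, pR)    # interface pressure
    HL = gamma / (gamma - 1.0) * pL / rhoL + 0.5 * uL ** 2  # total enthalpies
    HR = gamma / (gamma - 1.0) * pR / rhoR + 0.5 * uR ** 2
    PhiL = aL * np.array([rhoL, rhoL * uL, rhoL * HL])
    PhiR = aR * np.array([rhoR, rhoR * uR, rhoR * HR])
    conv = Mh * (PhiL if Mh >= 0 else PhiR)  # upwinded convective flux
    return conv + np.array([0.0, ph, 0.0])
```

For a uniform subsonic state the splittings recombine exactly (M⁺+M⁻ = M and p⁺+p⁻ = p), so the scheme reproduces the exact Euler flux; the paper's analysis concerns how the implied artificial diffusion of such splittings scales as the Mach number tends to zero.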

Federated learning (FL) is an emerging, privacy-preserving machine learning paradigm, drawing tremendous attention in both academia and industry. A unique characteristic of FL is heterogeneity, which resides in the varying hardware specifications and dynamic states across the participating devices. Theoretically, heterogeneity can exert a huge influence on the FL training process, e.g., causing a device to be unavailable for training or unable to upload its model updates. Unfortunately, these impacts have never been systematically studied and quantified in the existing FL literature. In this paper, we carry out the first empirical study to characterize the impacts of heterogeneity in FL. We collect large-scale data from 136k smartphones that can faithfully reflect heterogeneity in real-world settings. We also build a heterogeneity-aware FL platform that complies with the standard FL protocol but takes heterogeneity into consideration. Based on the data and the platform, we conduct extensive experiments to compare the performance of state-of-the-art FL algorithms under heterogeneity-aware and heterogeneity-unaware settings. Results show that heterogeneity causes non-trivial performance degradation in FL, including up to a 9.2% accuracy drop, 2.32x longer training time, and undermined fairness. Furthermore, we analyze potential impact factors and find that device failure and participant bias are two key factors behind the performance degradation. Our study provides insightful implications for FL practitioners. On the one hand, our findings suggest that FL algorithm designers account for heterogeneity during evaluation. On the other hand, our findings urge system providers to design specific mechanisms to mitigate the impacts of heterogeneity.
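One impact the study describes, device unavailability, is easy to reproduce in a toy simulation: standard FedAvg on a shared linear-regression objective, where each device participates in a round only with some probability. This is an illustrative sketch, not the paper's heterogeneity-aware platform; the function name and the availability model (`avail_prob`) are assumptions:

```python
import numpy as np

def fedavg(client_data, rounds=50, lr=0.1, avail_prob=1.0, rng=None):
    """Minimal FedAvg on a linear-regression objective.

    client_data : list of (X, y) pairs, one per device
    avail_prob  : probability a device is online in a given round;
                  values below 1 mimic heterogeneity-induced dropout.
    """
    rng = np.random.default_rng(rng)
    d = client_data[0][0].shape[1]
    w = np.zeros(d)
    for _ in range(rounds):
        updates = []
        for X, y in client_data:
            if rng.random() > avail_prob:
                continue  # device offline this round
            g = X.T @ (X @ w - y) / len(y)  # local gradient step
            updates.append(w - lr * g)
        if updates:
            w = np.mean(updates, axis=0)    # server averages survivors
    return w
```

Comparing runs with `avail_prob=1.0` against lower values gives a simple way to observe slower convergence and round-to-round bias toward whichever devices happened to be online, a small-scale analogue of the degradation the study quantifies.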
