
Understanding crop diversity is crucial for farming resilience, ecosystem services, and effective agro-environmental policies. We use a novel EU-wide satellite product (2018, 10 m resolution) to assess crop diversity across scales. We define local crop diversity ($\alpha$-diversity) at the 1 km scale, which in the EU corresponds to the area covered by a large farm or by clusters of small-to-medium sized farms. We also compute $\gamma$-diversity at the landscape, regional, and national levels. $\beta$-diversity ($\gamma$/$\alpha$) measures the diversity between agroecosystems. National $\alpha$-, $\gamma$-, and $\beta$-diversity vary widely ($\alpha$: 2.1-3.9, $\gamma$: 3.5-7.5, $\beta$: 1.22-2.27). EU-wide $\gamma$-diversity increases logarithmically with the spatial aggregation scale (1 km: 2.85; 100 km: 4.27). We group EU Member States (MS) into four categories to support crop diversification policy recommendations. Compared to the USA, the EU exhibits higher diversity, which we relate to differences in farm structure and practices. High local $\alpha$-diversity is found only in MS with small farms (<25 ha), although the presence of small farms does not always guarantee high local diversity. This study supports CAP implementation in the EU, and annual continental Copernicus crop type maps together with the exploration of ecosystem covariates offer avenues toward a deeper understanding of agro-ecosystem services.
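To make the diversity measures concrete, the following minimal Python sketch computes $\alpha$-, $\gamma$-, and $\beta$-diversity on a toy crop-type raster, assuming a Shannon-based effective number of crop types; the abstract does not specify the exact index used in the paper, so this choice is an illustrative assumption. With 10 m pixels, a 100-pixel block corresponds to the 1 km $\alpha$-scale described above.

```python
import numpy as np

def effective_diversity(labels):
    """Effective number of crop types: exp of the Shannon entropy
    of the crop-type frequency distribution within a window."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(np.exp(-np.sum(p * np.log(p))))

def alpha_gamma_beta(crop_map, block=100):
    """crop_map: 2D integer array of crop-type codes (10 m pixels,
    so block=100 gives 1 km x 1 km local windows).
    Returns mean local alpha, overall gamma, and beta = gamma / alpha."""
    h, w = crop_map.shape
    alphas = [
        effective_diversity(crop_map[i:i + block, j:j + block])
        for i in range(0, h - block + 1, block)
        for j in range(0, w - block + 1, block)
    ]
    alpha = float(np.mean(alphas))
    gamma = effective_diversity(crop_map)
    return alpha, gamma, gamma / alpha

# Toy example: random 4-crop landscape at 10 m resolution
rng = np.random.default_rng(0)
demo = rng.integers(0, 4, size=(300, 300))
print(alpha_gamma_beta(demo, block=100))
```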

Related Content

This paper presents an iterative detection and decoding scheme, along with an adaptive strategy, to improve the selection of access points (APs) in a grant-free uplink cell-free scenario. Keeping in mind the requirement that APs have low computational power, we introduce a low-complexity scheme for local activity and data detection. At the central processing unit (CPU), we propose an adaptive technique based on local log-likelihood ratios (LLRs) to select the list of APs that should be considered for each device. Simulation results show that the proposed LLR-based AP selection scheme outperforms existing techniques in the literature in terms of bit error rate (BER) while requiring a comparable fronthaul load.
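As an illustration of the AP-selection idea, the sketch below keeps, for each device, the APs whose local LLRs look most reliable at the CPU. The reliability proxy (mean $|LLR|$), the top-$k$ rule, and the threshold are hypothetical stand-ins, since the abstract does not define the exact selection rule.

```python
import numpy as np

def select_aps(llrs, k=4, tau=0.5):
    """llrs: array of shape (num_aps, num_bits) with each AP's local
    LLRs for one device's coded bits. Keep the k APs whose average
    LLR magnitude (a proxy for link reliability) is largest, provided
    it exceeds the threshold tau. Returns the selected AP indices."""
    reliability = np.abs(llrs).mean(axis=1)
    ranked = np.argsort(reliability)[::-1][:k]
    return [int(a) for a in ranked if reliability[a] > tau]

# Toy example: 8 APs, 64 coded bits per device
rng = np.random.default_rng(1)
quality = rng.uniform(0.1, 2.0, size=8)          # per-AP link quality
llrs = quality[:, None] * rng.standard_normal((8, 64))
print(select_aps(llrs))
```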

signSGD is popular in nonconvex optimization due to its communication efficiency. Yet, existing analyses of signSGD rely on the assumption that data are sampled with replacement in each iteration, contradicting the practical implementation where data are randomly reshuffled and fed into the algorithm sequentially. We bridge this gap by proving the first convergence result for signSGD with random reshuffling (SignRR) in nonconvex optimization. Given the dataset size $n$, the number of epochs of data passes $T$, and the variance bound of a stochastic gradient $\sigma^2$, we show that SignRR has the same convergence rate $O(\log(nT)/\sqrt{nT} + \|\sigma\|_1)$ as signSGD \citep{bernstein2018signsgd}. We then present SignRVR and SignRVM, which leverage variance-reduced gradients and momentum updates respectively; both converge at $O(\log (nT)/\sqrt{nT} + \log (nT)\sqrt{n}/\sqrt{T})$. In contrast with the analysis of signSGD, our results do not require the batch size in each iteration to be of the same order as the total number of iterations \citep{bernstein2018signsgd}, nor the signs of stochastic and true gradients to match element-wise with probability at least 1/2 \citep{safaryan2021stochastic}. We also extend our algorithms to the setting where data are distributed across machines, yielding dist-SignRVR and dist-SignRVM, both converging at $O(\log (n_0T)/\sqrt{n_0T} + \log (n_0T)\sqrt{n_0}/\sqrt{T})$, where $n_0$ is the dataset size of a single machine. We back up our theoretical findings with experiments on simulated and real-world problems, verifying that randomly reshuffled sign methods match or surpass existing baselines.
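The core SignRR update is simple to state: instead of sampling with replacement, each epoch draws a fresh permutation of the data and applies sign-of-gradient steps sequentially. The minimal sketch below shows this on a toy least-squares problem; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def sign_rr(grad_fn, x0, n, epochs=10, lr=1e-3, seed=0):
    """signSGD with random reshuffling (SignRR), minimal sketch:
    each epoch permutes the n samples once and applies
    sign-of-gradient updates sequentially."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(epochs):
        for i in rng.permutation(n):   # reshuffle once per epoch
            x -= lr * np.sign(grad_fn(x, i))
    return x

# Toy least-squares example: f_i(x) = 0.5 * (a_i @ x - b_i)^2
rng = np.random.default_rng(0)
A, x_true = rng.standard_normal((100, 5)), rng.standard_normal(5)
b = A @ x_true
grad = lambda x, i: A[i] * (A[i] @ x - b[i])
x_hat = sign_rr(grad, np.zeros(5), n=100, epochs=200, lr=1e-2)
print(np.linalg.norm(x_hat - x_true))   # small residual error
```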

In the context of climate change, many prospective studies, generally encompassing all areas of society, imagine possible futures in order to expand the range of options. The role of digital technologies within these possible futures is rarely targeted specifically. Which digital technologies and methodologies do these studies envision for a world that has mitigated and adapted to climate change? In this paper, we propose a typology of scenarios and use it to survey digital technologies and their applications across 14 prospective studies and their corresponding 35 future scenarios. We find that all the scenarios consider digital technology to be present in the future. We observe that only a few of them question our relationship with digital technology and the aspects related to its materiality, and that none of the general studies envision breakthroughs with respect to the technologies used today. Our results demonstrate the lack of a systemic view of information and communication technologies (ICT). We therefore argue for new prospective studies that envision the future of ICT.

The advancement of healthcare has shifted focus toward patient-centric approaches, particularly in self-care and patient education, facilitated by access to Electronic Health Records (EHRs). However, medical jargon in EHRs poses significant challenges to patient comprehension. To address this, we introduce a new task of automatically generating lay definitions, aiming to simplify complex medical terms into patient-friendly lay language. We first created the README dataset, an extensive collection of over 20,000 unique medical terms and 300,000 mentions, each paired with a context-aware lay definition manually annotated by domain experts. We also engineered a data-centric Human-AI pipeline that combines data filtering, augmentation, and selection to improve data quality. We then used README as training data and leveraged a Retrieval-Augmented Generation (RAG) method to reduce hallucinations and improve the quality of model outputs. Our extensive automatic and human evaluations demonstrate that open-source, mobile-friendly models, when fine-tuned on high-quality data, can match or even surpass the performance of state-of-the-art closed-source large language models like ChatGPT. This research represents a significant stride toward closing the knowledge gap in patient education and advancing patient-centric healthcare solutions.
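As a rough illustration of the RAG step, the sketch below retrieves the most similar expert-annotated examples and builds a grounded generation prompt. The TF-IDF retriever, the prompt format, and the example corpus are hypothetical simplifications for illustration, not the paper's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_prompt(term, context, corpus, k=3):
    """Retrieve the k most similar annotated examples and prepend them
    to the generation prompt to ground the model's lay definition."""
    vec = TfidfVectorizer().fit(corpus)
    sims = cosine_similarity(vec.transform([term + " " + context]),
                             vec.transform(corpus))[0]
    examples = [corpus[i] for i in sims.argsort()[::-1][:k]]
    shots = "\n".join(f"- {e}" for e in examples)
    return (f"Examples of lay definitions:\n{shots}\n\n"
            f"EHR context: {context}\n"
            f"Define '{term}' in plain, patient-friendly language:")

# Toy expert-annotated corpus (hypothetical entries)
corpus = ["hypertension: high blood pressure",
          "tachycardia: a faster-than-normal heartbeat",
          "edema: swelling caused by fluid in body tissues"]
print(build_prompt("dyspnea", "Patient reports dyspnea on exertion.", corpus))
```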

Private network deployment is gaining momentum in warehouses, retail, automation, health care, and similar use cases to guarantee mission-critical services with low latency. Guaranteeing delay-sensitive applications over Wi-Fi is challenging due to the nature of unlicensed spectrum. As the device ecosystem grows and expands, current and future devices increasingly support both Wi-Fi and private cellular networks (CBRS is the primary spectrum for private network deployment in the US). However, given the existing infrastructure and the huge investment in dense Wi-Fi networks, consumers prefer two deployment models. The first scenario is deploying the private network outdoors and using the existing Wi-Fi indoors. The second is to use the existing Wi-Fi network as a backup for offloading traffic indoors while using the private network in parallel for low-latency applications. Hence, in both scenarios, we expect roaming between the two technologies, \emph{i.e.,} Wi-Fi and the private cellular network. In this work, we quantify the roaming performance, that is, the service interruption time, when a device moves from Wi-Fi to the private network (CBRS) and vice versa.
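One simple way to quantify service interruption, sketched below, is to send periodic probe packets across the handover and take the largest gap between received replies. This measurement approach and its parameters are assumptions for illustration; the abstract does not describe the paper's methodology.

```python
# Minimal sketch: estimate the service gap during a Wi-Fi <-> CBRS
# handover from the timestamps of periodic probe replies.
def interruption_time(recv_times, probe_interval=0.01):
    """recv_times: sorted timestamps (s) of probe replies received
    across the handover. The interruption is the largest reply gap
    minus the nominal probe spacing."""
    gaps = [b - a for a, b in zip(recv_times, recv_times[1:])]
    return max(gaps) - probe_interval

# Toy trace: replies every 10 ms, with a 180 ms outage mid-roam
trace = [i * 0.01 for i in range(20)] + [0.38 + i * 0.01 for i in range(20)]
print(f"{interruption_time(trace) * 1000:.0f} ms")
```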

Scientific claims gain credibility through replicability, especially if replication under different circumstances and with varying designs yields equivalent results. Aggregating results over multiple studies is, however, not straightforward, and when the heterogeneity between studies increases, conventional methods such as (Bayesian) meta-analysis and Bayesian sequential updating become infeasible. *Bayesian Evidence Synthesis*, built upon the foundations of the Bayes factor, makes it possible to aggregate support for conceptually similar hypotheses over studies, regardless of methodological differences. We assess the performance of Bayesian Evidence Synthesis across multiple effect sizes and sample sizes, with a broad set of (inequality-constrained) hypotheses, using Monte Carlo simulations, focusing explicitly on the complexity of the hypotheses under consideration. The simulations show that this method can evaluate complex (informative) hypotheses regardless of methodological differences between studies, and performs adequately if the set of studies considered has sufficient statistical power. Additionally, we pinpoint challenging conditions that can lead to unsatisfactory results, and provide suggestions for handling these situations. Ultimately, we show that Bayesian Evidence Synthesis is a promising tool that can be used when traditional research synthesis methods are not applicable due to insurmountable between-study heterogeneity.
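In its simplest form, the aggregation step of Bayesian Evidence Synthesis multiplies study-level Bayes factors for conceptually similar hypotheses and normalizes the result into posterior model probabilities. The sketch below illustrates this step with toy Bayes factor values; it is a simplified view of the method, not a full implementation.

```python
import numpy as np

def combined_posterior_probs(bayes_factors, prior=None):
    """bayes_factors: array (num_studies, num_hypotheses), each entry
    a study-level Bayes factor against a common reference model.
    Support is aggregated by taking the product over studies and
    normalizing into posterior model probabilities."""
    bf = np.asarray(bayes_factors, dtype=float)
    evidence = bf.prod(axis=0)                 # aggregate over studies
    if prior is None:                          # equal prior probabilities
        prior = np.ones(bf.shape[1]) / bf.shape[1]
    post = evidence * prior
    return post / post.sum()

# Three studies, two competing informative hypotheses H1 and H2
bfs = [[3.2, 0.8],
       [2.5, 1.1],
       [4.0, 0.6]]
print(combined_posterior_probs(bfs))   # support accumulates for H1
```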

The existence of representative datasets is a prerequisite for many successful artificial intelligence and machine learning models. However, the subsequent application of these models often involves scenarios that are inadequately represented in the data used for training. The reasons for this are manifold and range from time and cost constraints to ethical considerations. As a consequence, the reliable use of these models, especially in safety-critical applications, remains a major challenge. Leveraging additional, already existing sources of knowledge is key to overcoming the limitations of purely data-driven approaches and, eventually, to increasing the generalization capability of these models. Furthermore, predictions that conform with knowledge are crucial for making trustworthy and safe decisions, even in underrepresented scenarios. This work provides an overview of existing techniques and methods in the literature that combine data-based models with existing knowledge. The identified approaches are structured according to three categories: integration, extraction, and conformity. Special attention is given to applications in the field of autonomous driving.

Conventional entity typing approaches are based on independent classification paradigms, which makes it difficult for them to recognize inter-dependent, long-tailed, and fine-grained entity types. In this paper, we argue that the extrinsic and intrinsic dependencies implicitly entailed between labels can provide critical knowledge for tackling these challenges. To this end, we propose the \emph{Label Reasoning Network (LRN)}, which sequentially reasons over fine-grained entity labels by discovering and exploiting the label dependency knowledge entailed in the data. Specifically, LRN utilizes an auto-regressive network to conduct deductive reasoning and a bipartite attribute graph to conduct inductive reasoning between labels, which can effectively model, learn, and reason about complex label dependencies in a sequence-to-set, end-to-end manner. Experiments show that LRN achieves state-of-the-art performance on standard ultra-fine-grained entity typing benchmarks and can also effectively resolve the long-tail label problem.
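The sequence-to-set decoding idea can be illustrated independently of the network details: labels are emitted one at a time, each conditioned on the labels already predicted, and the final output is treated as an unordered set. In the sketch below, the scoring function and label inventory are hypothetical stand-ins for the trained LRN.

```python
def decode_labels(score_fn, labels, end="<eos>", max_len=8):
    """Greedy autoregressive decoding into an (unordered) label set:
    each step picks the highest-scoring remaining label given the
    labels emitted so far, stopping at the end symbol."""
    emitted = []
    for _ in range(max_len):
        remaining = [l for l in labels + [end] if l not in emitted]
        nxt = max(remaining, key=lambda l: score_fn(l, emitted))
        if nxt == end:
            break
        emitted.append(nxt)
    return set(emitted)   # order is irrelevant: sequence-to-set

# Toy scorer: "person" makes "artist" likely, which makes "musician" likely
chains = {(): "person", ("person",): "artist", ("person", "artist"): "musician"}
score = lambda l, ctx: (1.0 if chains.get(tuple(ctx)) == l
                        else 0.1 if l == "<eos>" else 0.0)
print(decode_labels(score, ["person", "artist", "musician", "location"]))
```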

Defensive deception is a promising approach for cyberdefense. Although defensive deception is increasingly popular in the research community, there has not been a systematic investigation of its key components, the underlying principles, and its tradeoffs in various problem settings. This survey paper focuses on defensive deception research centered on game theory and machine learning, since these are prominent families of artificial intelligence approaches that are widely employed in defensive deception. This paper brings forth insights, lessons, and limitations from prior work. It closes with an outline of some research directions to tackle major gaps in current defensive deception research.

Spectral clustering is a leading and popular technique in unsupervised data analysis. Two of its major limitations are scalability and generalization of the spectral embedding (i.e., out-of-sample extension). In this paper we introduce a deep learning approach to spectral clustering that overcomes these shortcomings. Our network, which we call SpectralNet, learns a map that embeds input data points into the eigenspace of their associated graph Laplacian matrix and subsequently clusters them. We train SpectralNet using a procedure that involves constrained stochastic optimization. Stochastic optimization allows it to scale to large datasets, while the constraints, implemented using a special-purpose output layer, keep the network outputs orthogonal. Moreover, the map learned by SpectralNet naturally generalizes the spectral embedding to unseen data points. To further improve the quality of the clustering, we replace the standard pairwise Gaussian affinities with affinities learned from unlabeled data using a Siamese network. Additional improvement can be achieved by applying the network to code representations produced, e.g., by standard autoencoders. Our end-to-end learning procedure is fully unsupervised. In addition, we apply VC dimension theory to derive a lower bound on the size of SpectralNet. State-of-the-art clustering results are reported on the Reuters dataset. Our implementation is publicly available at //github.com/kstant0725/SpectralNet.
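The orthogonality constraint can be enforced by a linear output layer computed from the current batch, e.g., via a Cholesky factorization, so that the batch embeddings come out orthonormal. The numpy sketch below demonstrates one such construction; it may differ in detail from the paper's actual layer.

```python
import numpy as np

def orthonorm_weights(Y):
    """Given raw network outputs Y for a batch of size m, return a
    linear map W such that (1/m) * (Y W)^T (Y W) = I. Since
    Y^T Y / m = L L^T (Cholesky), choosing W = L^{-T} works."""
    m = Y.shape[0]
    L = np.linalg.cholesky(Y.T @ Y / m)
    return np.linalg.inv(L).T

rng = np.random.default_rng(0)
Y = rng.standard_normal((256, 4))              # batch of raw outputs
W = orthonorm_weights(Y)
E = Y @ W                                      # orthonormalized embeddings
print(np.round(E.T @ E / Y.shape[0], 6))       # ~ identity matrix
```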
