
In this research paper, we address the Distinct Elements estimation problem in the context of streaming algorithms. The problem involves estimating the number of distinct elements in a given data stream $\mathcal{A} = (a_1, a_2, \ldots, a_m)$, where $a_i \in \{1, 2, \ldots, n\}$. Over the past four decades, the Distinct Elements problem has received considerable attention, both theoretically and empirically, leading to the development of space-optimal algorithms. A recent sampling-based algorithm proposed by Chakraborty et al. [11] has garnered significant interest and has even attracted the attention of the renowned computer scientist Donald E. Knuth, who wrote an article on the topic [6] and called the algorithm CVM. In this paper, we thoroughly examine these algorithms (referred to as CVM1 and CVM2 in [11], and DonD and DonD' in [6]). We first unify all of these algorithms under the name of cutoff-based algorithms. Then we provide an approximation and bias analysis of these algorithms.
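For concreteness, the following Python sketch captures the common cutoff-based core of these algorithms (the variants differ mainly in how they treat the rare event that the subsampling step fails to shrink the buffer; here we simply repeat the subsampling, which is one of several possible choices):

    import random

    def cvm_estimate(stream, s, rng=None):
        """Estimate the number of distinct elements in `stream`
        while buffering at most s elements at a time."""
        rng = rng or random.Random()
        p = 1.0          # current sampling probability
        buf = set()      # buffered sample of the distinct elements seen
        for a in stream:
            buf.discard(a)            # forget any stale decision about a
            if rng.random() < p:
                buf.add(a)            # keep a with probability p
            while len(buf) >= s:      # cutoff reached: subsample, halve p
                buf = {x for x in buf if rng.random() < 0.5}
                p /= 2.0
        return len(buf) / p           # estimate of the distinct count

    # Example: a stream over ~1000 distinct values, buffer of 100.
    data = [random.randrange(1000) for _ in range(100_000)]
    print(cvm_estimate(iter(data), s=100))

For a suitable buffer size $s$ (growing as $\varepsilon^{-2}$), the returned value $|B|/p$ concentrates within a $(1 \pm \varepsilon)$ factor of the true number of distinct elements with high probability; the bias analysis in the paper makes precise whether each variant's estimate is in fact unbiased.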

Related Content

In this paper we show how rule-based decision making can be combined with traditional motion planning techniques to achieve human-like behavior of a self-driving vehicle in complex traffic situations. We give and discuss examples of decision rules in autonomous driving. We draw on these examples to illustrate that developing techniques for spatial awareness of robots is an exciting activity which deserves more attention from the spatial reasoning community than it has received so far.
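As a toy illustration of the combination (all names and thresholds below are hypothetical), hard decision rules can prune candidate trajectories before a conventional cost-based planner ranks the survivors:

    from dataclasses import dataclass
    from typing import Callable, List, Optional

    @dataclass
    class Trajectory:
        max_speed: float          # m/s along the trajectory
        min_gap: float            # closest distance to any obstacle, m
        crosses_solid_line: bool  # traffic-rule violation flag
        cost: float               # smoothness/comfort cost from the planner

    Rule = Callable[[Trajectory], bool]  # True = trajectory is admissible

    RULES: List[Rule] = [
        lambda t: t.max_speed <= 13.9,       # respect a 50 km/h speed limit
        lambda t: t.min_gap >= 1.0,          # keep at least 1 m clearance
        lambda t: not t.crosses_solid_line,  # never cross a solid line
    ]

    def select(candidates: List[Trajectory]) -> Optional[Trajectory]:
        """Rules prune; the planner's cost ranks the admissible remainder."""
        admissible = [t for t in candidates if all(r(t) for r in RULES)]
        return min(admissible, key=lambda t: t.cost) if admissible else None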

In this paper, we tackle the problem of joint symbol-level precoding (SLP) and reconfigurable intelligent surface (RIS) phase-shift design with constellation rotation in the finite-block-length regime. We aim to increase energy efficiency by minimizing the total transmit power while satisfying the quality-of-service constraints. The total power consumption can be significantly reduced by exploiting multiuser interference through symbol-level precoding and by intelligently manipulating the propagation environment using reconfigurable intelligent surfaces. In addition, the per-user constellation rotation contributes to energy efficiency by aligning the symbol phases of the users, thus improving the utilization of constructive interference. The formulated power minimization problem is non-convex and correspondingly difficult to solve directly. Hence, we employ an alternating optimization algorithm to tackle the joint optimization of the SLP and RIS phase-shift design. The optimal phase of each user's constellation rotation is obtained via an exhaustive search. Through Monte Carlo simulations, we demonstrate that the proposed solution yields substantial power savings compared to conventional SLP, zero-forcing precoding with RIS, and benchmark schemes without RIS.
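To make the structure of the problem concrete, one common way to write the SLP power-minimization subproblem for $M$-PSK under constructive-interference constraints is (our notation is illustrative; the paper's exact formulation may differ)

\[
\min_{\mathbf{x}} \; \|\mathbf{x}\|_2^2
\quad \text{s.t.} \quad
\left|\Im\!\left(\tilde{\mathbf{h}}_k^{H}(\boldsymbol{\theta})\,\mathbf{x}\,e^{-j\phi_k}\right)\right|
\le
\left(\Re\!\left(\tilde{\mathbf{h}}_k^{H}(\boldsymbol{\theta})\,\mathbf{x}\,e^{-j\phi_k}\right) - \sqrt{\Gamma_k \sigma_k^2}\right)\tan\frac{\pi}{M}
\quad \forall k,
\]

where $\mathbf{x}$ is the precoded transmit vector, $\tilde{\mathbf{h}}_k^{H}(\boldsymbol{\theta}) = \mathbf{h}_{d,k}^{H} + \mathbf{h}_{r,k}^{H}\,\mathrm{diag}(\boldsymbol{\theta})\,\mathbf{G}$ is user $k$'s effective channel through the RIS, $\phi_k$ is the phase of user $k$'s (rotated) symbol, and $\Gamma_k$ and $\sigma_k^2$ are the SNR target and noise power. For fixed RIS phases $\boldsymbol{\theta}$ this is convex in $\mathbf{x}$; the unit-modulus constraints $|\theta_n| = 1$ on the RIS elements are what make the joint problem non-convex, which is why the alternating scheme fixes one block of variables while optimizing the other.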

How can human feedback be incorporated into a generative model? To answer this question, we demonstrate a method applied to the denoising problem and to domain adaptation using human feedback. Deep generative models have demonstrated impressive results in image denoising. However, current image denoising models often produce inappropriate results when applied to domains different from the ones they were trained on. Given 'good' and 'bad' results on unseen data, the question is how to raise the quality of the 'bad' results. Most methods take an approach based on improving the generalization of the model, but they require target-domain images for training or adaptation. In this paper, to adapt to a new domain, we work with non-target images from the unseen domain and improve specific failed outputs. To this end, we propose a method for fine-tuning inappropriate results generated in a different domain by utilizing human feedback. First, we train a generator to denoise images using only noisy MNIST digit '0' images. This denoising generator, trained on the source domain, produces unintended results when applied to target-domain images. To achieve domain adaptation, we construct a dataset of noisy images paired with the generator's denoised outputs and train a reward model to predict human feedback on them. Finally, we fine-tune the generator on the new domain using the reward model together with an auxiliary loss function, aiming to transfer denoising capability to the target domain. Our approach demonstrates the potential to efficiently fine-tune a generator trained on one domain using human feedback from another, thereby enhancing denoising ability in different domains.
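A minimal PyTorch-style sketch of the fine-tuning step (the module names, the anchor term, and the weight `lam` are our assumptions, not the paper's released code): the reward model, trained beforehand on human good/bad labels and then frozen, scores the generator's output, while an auxiliary anchor term keeps the fine-tuned generator close to its pre-fine-tuning behavior:

    import torch
    import torch.nn.functional as F

    def finetune_step(generator, ref_generator, reward_model, optimizer,
                      noisy, lam=0.1):
        """One fine-tuning step on a batch of target-domain noisy images.

        reward_model's parameters are frozen beforehand
        (p.requires_grad_(False) for each parameter p), but gradients
        still flow through it back to the generator."""
        denoised = generator(noisy)
        reward = reward_model(denoised).mean()    # predicted human score in [0, 1]
        with torch.no_grad():                     # frozen pre-fine-tuning copy
            reference = ref_generator(noisy)
        anchor = F.mse_loss(denoised, reference)  # auxiliary loss: stay close
        loss = anchor - lam * reward              # maximize reward under the anchor
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()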

In this technical report, we present our findings from research conducted on the Human-Object Interaction 4D (HOI4D) dataset for the egocentric action segmentation task. As a relatively new research area, point cloud video methods may not be good at temporal modeling, especially for long point cloud videos (e.g., 150 frames). In contrast, traditional video understanding methods are well developed, and their effectiveness at temporal modeling has been widely verified on many large-scale video datasets. Therefore, we convert point cloud videos into depth videos and employ traditional video modeling methods to improve 4D action segmentation. By ensembling the depth and point cloud video methods, accuracy is significantly improved. The proposed method, named Mixture of Depth and Point cloud video experts (DPMix), achieved first place in the 4D Action Segmentation Track of the HOI4D Challenge 2023.
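A minimal sketch of the ensembling step (the equal 0.5/0.5 weighting is an illustrative assumption; in practice the weight would be tuned on validation data): per-frame class probabilities from the depth-video expert and the point-cloud-video expert are averaged before taking the argmax:

    import numpy as np

    def ensemble_segmentation(depth_logits, point_logits, w_depth=0.5):
        """Both inputs have shape (num_frames, num_classes); returns a
        per-frame action label after averaging the experts' softmaxes."""
        def softmax(z):
            z = z - z.max(axis=1, keepdims=True)
            e = np.exp(z)
            return e / e.sum(axis=1, keepdims=True)
        probs = (w_depth * softmax(depth_logits)
                 + (1.0 - w_depth) * softmax(point_logits))
        return probs.argmax(axis=1)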

In this paper we investigate the Curry-Howard correspondence for constructive modal logic in light of the gap between the proof equivalences enforced by the lambda calculi from the literature and by the recently defined winning strategies for this logic. We define a new lambda-calculus for a minimal constructive modal logic by enriching the calculus from the literature with additional reduction rules and we prove normalization and confluence for our calculus. We then provide a typing system in the style of focused proof systems allowing us to provide a unique proof for each term in normal form, and we use this result to show a one-to-one correspondence between terms in normal form and winning innocent strategies.
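As a generic illustration of "term in normal form" (plain untyped lambda terms only; the paper's calculus additionally carries modal constructors and the extra reduction rules mentioned above), a term is normal exactly when it contains no $\beta$-redex:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Var:
        name: str

    @dataclass(frozen=True)
    class Lam:
        param: str
        body: object

    @dataclass(frozen=True)
    class App:
        fn: object
        arg: object

    def is_normal(term) -> bool:
        """True iff term contains no beta-redex, i.e. no App whose
        function position is a Lam."""
        if isinstance(term, Var):
            return True
        if isinstance(term, Lam):
            return is_normal(term.body)
        return (not isinstance(term.fn, Lam)
                and is_normal(term.fn) and is_normal(term.arg))

    # (\x. x) y reduces, so it is not normal; \x. x y is normal.
    assert not is_normal(App(Lam("x", Var("x")), Var("y")))
    assert is_normal(Lam("x", App(Var("x"), Var("y"))))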

In this study, we focus on the development and implementation of a comprehensive ensemble of numerical time series forecasting models, collectively referred to as the Group of Numerical Time Series Prediction Model (G-NM). This inclusive set comprises traditional models such as Autoregressive Integrated Moving Average (ARIMA), Holt-Winters' method, and Support Vector Regression (SVR), in addition to modern neural network models such as the Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM). G-NM is explicitly constructed to augment our predictive capabilities related to patterns and trends inherent in complex natural phenomena. By utilizing time series data relevant to these events, G-NM facilitates the prediction of such phenomena over extended periods. The primary objective of this research is both to advance our understanding of such occurrences and to significantly enhance the accuracy of our forecasts. G-NM encapsulates the linear and non-linear dependencies, seasonalities, and trends present in time series data. Each of these models contributes distinct strengths, from ARIMA's resilience in handling linear trends and seasonality and SVR's proficiency in capturing non-linear patterns to LSTM's adaptability in modeling various components of time series data. By exploiting the potential of G-NM, we strive to advance the state of the art in large-scale time series forecasting. We anticipate that this research will represent a significant stepping stone in our ongoing endeavor to comprehend and forecast the complex events that constitute the natural world.
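A minimal sketch of the ensemble idea under simplifying assumptions (two members only and a plain average; the LSTM member and any learned combination weights would follow the same pattern):

    import numpy as np
    from sklearn.svm import SVR
    from statsmodels.tsa.arima.model import ARIMA

    def gnm_forecast(y, horizon, lags=12):
        """Average an ARIMA forecast (linear structure) with a recursive
        SVR forecast on lagged windows (non-linear structure)."""
        # Member 1: ARIMA captures linear trend and autocorrelation.
        arima_fc = np.asarray(ARIMA(y, order=(1, 1, 1)).fit().forecast(horizon))

        # Member 2: SVR trained on sliding windows of `lags` past values.
        X = np.array([y[i:i + lags] for i in range(len(y) - lags)])
        svr = SVR(kernel="rbf").fit(X, y[lags:])
        history = list(y[-lags:])
        svr_fc = []
        for _ in range(horizon):  # recursive multi-step forecasting
            nxt = svr.predict(np.array(history[-lags:]).reshape(1, -1))[0]
            svr_fc.append(nxt)
            history.append(nxt)

        return (arima_fc + np.asarray(svr_fc)) / 2.0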

In this paper, we study two popular variants of Graph Coloring -- Dominator Coloring and CD Coloring. In both problems, we are given a graph $G$ and a natural number $\ell$ as input and the goal is to properly color the vertices with at most $\ell$ colors with specific constraints. In Dominator Coloring, we require for each $v \in V(G)$, a color $c$ such that $v$ dominates all vertices colored $c$. In CD Coloring, we require for each color $c$, a $v \in V(G)$ which dominates all vertices colored $c$. These problems, motivated by their applications in social and genetic networks, have been studied extensively over the last 15 years. While it is known that both problems are fixed-parameter tractable (FPT) when parameterized by $(t,\ell)$ where $t$ is the treewidth of $G$, we consider strictly structural parameterizations which naturally arise out of the problems' applications. We prove that Dominator Coloring is FPT when parameterized by the size of a graph's cluster vertex deletion (CVD) set and that CD Coloring is FPT parameterized by CVD set size plus the number of remaining cliques. En route, we design simpler and faster FPT algorithms when the problems are parameterized by the size of a graph's twin cover, a special CVD set. When the parameter is the size of a graph's clique modulator, we design a randomized single-exponential-time algorithm for the problems. These algorithms use an inclusion-exclusion-based polynomial sieving technique and add to the growing number of applications of this powerful algebraic technique.
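For the definitions above, a small brute-force checker makes the distinction between the two constraints concrete (the graph is an adjacency dict; domination uses closed neighbourhoods $N[v]$, and we assume the coloring has already been verified to be proper):

    def closed_nbhd(G, v):
        return {v} | set(G[v])

    def color_classes(coloring):
        classes = {}
        for u, c in coloring.items():
            classes.setdefault(c, set()).add(u)
        return classes

    def is_dominator_coloring(G, coloring):
        """Every vertex v dominates some color class in its entirety."""
        classes = color_classes(coloring).values()
        return all(any(cls <= closed_nbhd(G, v) for cls in classes)
                   for v in G)

    def is_cd_coloring(G, coloring):
        """Every color class is dominated in its entirety by some vertex."""
        return all(any(cls <= closed_nbhd(G, v) for v in G)
                   for cls in color_classes(coloring).values())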

In this paper, we consider the coupled neutronics/thermal-hydraulics (N/TH) problem, in which the termination criterion for the neutronics iteration adopts an adaptive tolerance with respect to the fuel temperature residual at each Picard iteration. We refer to this coupling scheme as the inexact Picard iteration method. Fourier analysis is performed to investigate how the convergence behavior of the Picard iteration is influenced by the inexact neutronics solution. We find that if the convergence of the inner neutronics iteration is slow, the Picard coupling may become unstable unless a tighter tolerance is used for the neutronics iteration. In other words, our analysis indicates that a certain amount of over-solving is necessary to maintain the stability of the Picard iteration when the iterative solution of the subproblem does not converge quickly enough, an issue that has not been addressed in previous studies.
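A toy fixed-point sketch of the scheme (the operator and parameter names are ours): the inner neutronics iteration is terminated once its residual falls below a factor $\eta$ of the current fuel-temperature residual, so the neutronics problem is solved only as accurately as the outer Picard progress warrants; a smaller $\eta$ means more over-solving and, per the analysis, a more robust coupling when the inner iteration converges slowly:

    import numpy as np

    def inexact_picard(solve_th, neutronics_step, phi0, T0,
                       eta=0.1, tol=1e-8, max_outer=50, max_inner=200):
        """Picard coupling with an adaptive inner tolerance tied to the
        fuel-temperature residual (toy illustration, not reactor code)."""
        phi, T = phi0, T0
        for k in range(max_outer):
            T_new = solve_th(phi)                  # thermal-hydraulics update
            dT = np.linalg.norm(T_new - T)
            if dT < tol:
                return phi, T_new, k
            inner_tol = max(eta * dT, tol)         # adaptive inner tolerance
            for _ in range(max_inner):             # inexact neutronics solve
                phi_next = neutronics_step(phi, T_new)
                converged = np.linalg.norm(phi_next - phi) < inner_tol
                phi = phi_next
                if converged:
                    break
            T = T_new
        return phi, T, max_outer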

In this work, we introduce the notion of Context-Based Prediction Models. A Context-Based Prediction Model determines the probability of a user's action (such as a click or a conversion) solely by relying on user and contextual features, without considering any specific features of the item itself. We have identified numerous valuable applications for this modeling approach, including training an auxiliary context-based model to estimate click probability and incorporating its prediction as a feature in CTR prediction models. Our experiments indicate that this enhancement brings significant improvements in offline and online business metrics while having minimal impact on the cost of serving. Overall, our work offers a simple and scalable, yet powerful approach for enhancing the performance of large-scale commercial recommender systems, with broad implications for the field of personalized recommendations.
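A minimal two-stage sketch with scikit-learn (the feature split and model choice are illustrative; production CTR models would be far larger): a context-only model is trained first, and its predicted click probability is appended as an extra feature for the main model:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def train_with_context_prior(X_context, X_item, clicks):
        # Stage 1: context-based prediction model -- user/context
        # features only, no item features.
        ctx_model = LogisticRegression(max_iter=1000).fit(X_context, clicks)
        prior = ctx_model.predict_proba(X_context)[:, 1].reshape(-1, 1)

        # Stage 2: the main CTR model sees item features plus the
        # auxiliary model's prediction as one additional input column.
        X_main = np.hstack([X_context, X_item, prior])
        ctr_model = LogisticRegression(max_iter=1000).fit(X_main, clicks)
        return ctx_model, ctr_model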

In this paper, we present a comprehensive review of the imbalance problems in object detection. To analyze the problems in a systematic manner, we introduce a problem-based taxonomy. Following this taxonomy, we discuss each problem in depth and present a unifying yet critical perspective on the solutions in the literature. In addition, we identify major open issues regarding the existing imbalance problems as well as imbalance problems that have not been discussed before. Moreover, in order to keep our review up to date, we provide an accompanying webpage which catalogs papers addressing imbalance problems, according to our problem-based taxonomy. Researchers can track newer studies on this webpage, available at https://github.com/kemaloksuz/ObjectDetectionImbalance.
