
In the online bin packing problem, a sequence of items is revealed one at a time, and each item must be packed into an available bin immediately upon its arrival. In this paper, we revisit the problem under a setting where the total number of items $T$ is known in advance, also known as the closed online bin packing problem. Specifically, we study both the stochastic model and the random permutation model. We develop and analyze an adaptive algorithm that solves an offline bin packing problem at geometric time intervals and uses the offline optimal solution to guide online packing decisions. Under both models, we show that the algorithm achieves $C\sqrt{T}$ regret (in terms of the number of used bins) compared to the hindsight optimal solution, where $C$ is a universal constant ($\le 13$) that bears no dependence on the underlying distribution or the item sizes. The result shows that the lower bound barrier of $\Omega(\sqrt{T \log T})$ in Shor (1986) can be broken with knowledge of the horizon $T$ alone, without any knowledge of the underlying distribution. For the algorithm analysis, we develop a new approach to analyzing the packing dynamics using the notion of exchangeable random variables. The approach creates a symmetrization between the offline solution and the online solution, and it is used to analyze both the algorithm's performance and various benchmarks related to the bin packing problem. For the latter, our analysis provides an alternative (arguably simpler) treatment and tightens the analysis of the asymptotic benchmark of the stochastic bin packing problem in Rhee and Talagrand (1989a,b). As the analysis relies only on a symmetry between the offline and online problems, the algorithm and benchmark analyses extend naturally from the stochastic model to the random permutation model.
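
A minimal Python sketch of the re-solving idea described above, assuming first-fit decreasing (`offline_ffd`) as a stand-in for the offline solver, a plain best-fit online rule, and a future resampled from the items observed so far; the names and the way the offline solve is used (only to maintain a bin-count estimate) are illustrative assumptions, whereas the paper's algorithm uses the offline solution itself to guide placements.

```python
import random

def offline_ffd(sizes, capacity=1.0):
    """First-fit decreasing, used here as an illustrative offline solver;
    returns the number of bins it needs for the given item sizes."""
    residuals = []
    for s in sorted(sizes, reverse=True):
        for i, r in enumerate(residuals):
            if r >= s:
                residuals[i] = r - s
                break
        else:
            residuals.append(capacity - s)
    return len(residuals)

def resolving_best_fit(items, capacity=1.0, seed=0):
    """Sketch of the re-solving schedule in the closed (known-horizon) setting:
    the horizon T = len(items) is known, the offline problem is re-solved at
    geometrically spaced times on a pseudo-instance whose future is resampled
    from the observed sizes, and arrivals are packed by best fit.  The offline
    solve here only maintains a running estimate of the final bin count."""
    rng = random.Random(seed)
    T = len(items)
    resolve_at = {T - T // (2 ** k) for k in range(1, T.bit_length() + 1)}
    residuals = []           # residual capacities of the online bins
    estimate = None          # offline-guided estimate of the bins needed
    for t, s in enumerate(items):
        if t in resolve_at:
            future = [rng.choice(items[: t + 1]) for _ in range(T - t - 1)]
            estimate = len(residuals) + offline_ffd(future, capacity)
        # Best fit: the tightest open bin that still accommodates the item.
        best = min((i for i, r in enumerate(residuals) if r >= s),
                   key=lambda i: residuals[i], default=None)
        if best is None:
            residuals.append(capacity - s)
        else:
            residuals[best] -= s
    return len(residuals), estimate
```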

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community an opportunity to further advance the foundations of modeling and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
February 7, 2022

We study ROUND-UFP and ROUND-SAP, two generalizations of the classical BIN PACKING problem that correspond to the unsplittable flow problem on a path (UFP) and the storage allocation problem (SAP), respectively. We are given a path with capacities on its edges and a set of tasks, where for each task we are given a demand and a subpath. In ROUND-UFP, the goal is to find a packing of all tasks into a minimum number of copies (rounds) of the given path such that, in each copy, the total demand of tasks on any edge does not exceed the capacity of the respective edge. In ROUND-SAP, the tasks are considered to be rectangles and the goal is to find a non-overlapping packing of these rectangles into a minimum number of rounds such that all rectangles lie completely below the capacity profile of the edges. We show that, in contrast to BIN PACKING, neither problem admits an asymptotic polynomial-time approximation scheme (APTAS), even when all edge capacities are equal. However, for this setting, we obtain asymptotic $(2+\varepsilon)$-approximations for both problems. For the general case, we obtain an $O(\log\log n)$-approximation algorithm and an $O(\log\log\frac{1}{\delta})$-approximation under $(1+\delta)$-resource augmentation for both problems. For the intermediate setting of the no-bottleneck assumption (i.e., the maximum task demand is at most the minimum edge capacity), we obtain absolute $12$- and asymptotic $(16+\varepsilon)$-approximation algorithms for ROUND-UFP and ROUND-SAP, respectively.
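
To make the ROUND-UFP setting concrete, here is a small Python sketch of a first-fit heuristic (not one of the paper's approximation algorithms): each task, described by an inclusive range of edge indices and a demand, is placed into the first round whose residual capacity accommodates it on every edge of its subpath. The input format is an assumption made for illustration.

```python
def round_ufp_first_fit(capacities, tasks):
    """Illustrative first-fit heuristic for ROUND-UFP.
    capacities[e] is the capacity of edge e; each task is a triple
    (first_edge, last_edge, demand) giving its subpath as an inclusive
    range of edge indices together with its demand."""
    rounds = []  # one list of residual edge capacities per round
    for first, last, demand in tasks:
        if any(capacities[e] < demand for e in range(first, last + 1)):
            raise ValueError("task demand exceeds an edge capacity: infeasible")
        for residual in rounds:
            if all(residual[e] >= demand for e in range(first, last + 1)):
                for e in range(first, last + 1):
                    residual[e] -= demand
                break
        else:
            residual = list(capacities)
            for e in range(first, last + 1):
                residual[e] -= demand
            rounds.append(residual)
    return len(rounds)

# e.g. round_ufp_first_fit([2, 2, 3], [(0, 1, 1), (1, 2, 2), (0, 2, 1)]) -> 2
```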

In this paper, I consider a fine-grained dichotomy for the Boolean counting constraint satisfaction problem (#CSP) under the counting version of the exponential time hypothesis (#ETH). Suppose $\mathscr{F}$ is a finite set of algebraic complex-valued functions defined on the Boolean domain. When $\mathscr{F}$ is a subset of either of two special function sets, I prove that #CSP($\mathscr{F}$) is polynomial-time solvable; otherwise it cannot be computed in sub-exponential time unless #ETH fails. I also strengthen the result by proving that the same dichotomy holds for #CSP with bounded degree (every variable appears in at most a constant number of constraints), even for #R$_3$-CSP. An important preparation before proving the result is to argue that pinning (using the two special unary functions $[1,0]$ and $[0,1]$ to reduce arity) preserves the sub-exponential lower bound of a Boolean #CSP problem. I address this issue using several methods that are common in proofs of #P-hardness of counting problems; the proof illustrates the internal connections among these methods.

In this paper, we are interested in the performance of a variable-length stop-feedback (VLSF) code with $m$ optimal decoding times for the binary-input additive white Gaussian noise channel. We first develop tight approximations to the tail probability of the length-$n$ cumulative information density. Building on the work of Yavas \emph{et al.}, for a given information density threshold, we formulate an integer program that minimizes the upper bound on the average blocklength over all decoding times, subject to the average error probability, minimum-gap, and integer constraints. Minimizing the locally minimum upper bounds over all thresholds then yields the globally minimum upper bound; we call this the two-step minimization. For the integer program, we present a greedy algorithm that yields possibly suboptimal integer decoding times. By allowing positive real-valued decoding times, we develop the gap-constrained sequential differential optimization (SDO) procedure that sequentially produces the optimal, real-valued decoding times. We identify the error regime in which Polyanskiy's scheme of stopping at zero does not improve the achievability bound. In this error regime, the two-step minimization with the gap-constrained SDO shows that a finite $m$ suffices to attain Polyanskiy's bound for VLSF codes with $m = \infty$.
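
A schematic Python sketch of the greedy selection of integer decoding times under a minimum-gap constraint. The `objective` callable stands in for the paper's upper bound on the average blocklength at a fixed information-density threshold; the names and interface are assumptions rather than the paper's exact formulation.

```python
import math

def greedy_decoding_times(m, n_max, gap, objective):
    """Greedily choose up to m integer decoding times in {1, ..., n_max},
    keeping consecutive times at least `gap` apart.  At each step the
    feasible time whose inclusion lowers `objective` the most is added;
    `objective` maps a sorted tuple of decoding times to the quantity
    being minimized (a placeholder for the blocklength upper bound)."""
    chosen = []
    for _ in range(m):
        best_t, best_val = None, math.inf
        for t in range(1, n_max + 1):
            if t in chosen or any(abs(t - c) < gap for c in chosen):
                continue
            val = objective(tuple(sorted(chosen + [t])))
            if val < best_val:
                best_t, best_val = t, val
        if best_t is None:      # no feasible time remains
            break
        chosen.append(best_t)
    return sorted(chosen)
```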

Particle smoothers are SMC (Sequential Monte Carlo) algorithms designed to approximate the joint distribution of the states given observations from a state-space model. We propose dSMC (de-Sequentialized Monte Carlo), a new particle smoother that is able to process $T$ observations in $\mathcal{O}(\log T)$ time on parallel architectures. This compares favourably with standard particle smoothers, whose complexity is linear in $T$. We derive $\mathcal{L}_p$ convergence results for dSMC, with an explicit upper bound, polynomial in $T$. We then discuss how to reduce the variance of the smoothing estimates computed by dSMC by (i) designing good proposal distributions for sampling the particles at the initialization of the algorithm, and (ii) using lazy resampling to increase the number of particles used in dSMC. Finally, we design a particle Gibbs sampler based on dSMC, which is able to perform parameter inference in a state-space model at $\mathcal{O}(\log T)$ cost on parallel hardware.
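
The $\mathcal{O}(\log T)$ parallel depth comes from replacing the left-to-right sweep with pairwise merging. The sketch below shows only that combine pattern with an abstract `merge` assumed to be associative; in dSMC the merge combines particle approximations of conditional smoothing laws, which is not reproduced here.

```python
def tree_reduce(blocks, merge):
    """De-sequentialized combine pattern: per-observation (or per-block)
    quantities are merged pairwise level by level, so the depth is
    O(log T) and every merge within a level can run in parallel."""
    while len(blocks) > 1:
        paired = []
        for i in range(0, len(blocks) - 1, 2):
            paired.append(merge(blocks[i], blocks[i + 1]))  # parallelizable
        if len(blocks) % 2 == 1:
            paired.append(blocks[-1])                       # odd block carried over
        blocks = paired
    return blocks[0]

# e.g. tree_reduce([block_stat(y) for y in observations], merge) has O(log T) depth
```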

In the online simple knapsack problem, items are presented one at a time and an algorithm has to decide, for each item, whether to reject it or permanently include it in the knapsack, without any knowledge of the rest of the instance. The goal is to pack the knapsack as full as possible. In this work, we introduce the option of reserving items at the cost of a fixed fraction $\alpha$ of their size. An algorithm may pay this fraction in order to postpone its decision on whether to include or reject these items until after the last item of the instance has been presented. While the classical online simple knapsack problem does not admit any constantly bounded competitive ratio in the deterministic setting, we find that adding the possibility of reservation makes the problem constantly competitive. We give tight bounds for the whole range of $\alpha$ from $0$ to $1$.
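
A toy Python simulation of the reservation model (knapsack capacity 1), using a fixed size threshold to decide what to reserve; the threshold, the greedy packing rule, and the net-gain objective (packed size minus reservation costs) are illustrative assumptions, not the paper's competitive policies.

```python
def reservation_knapsack(items, alpha, reserve_threshold=0.5):
    """Toy policy for the online simple knapsack with reservations:
    items smaller than `reserve_threshold` are packed greedily on arrival,
    larger items are reserved at cost alpha * size, and after the last
    arrival the reserved items are packed greedily into the leftover space.
    Returns the packed size net of reservation costs."""
    packed, reserved, cost = 0.0, [], 0.0
    for s in items:
        if s >= reserve_threshold:
            reserved.append(s)        # decision postponed until the end
            cost += alpha * s
        elif packed + s <= 1.0:
            packed += s               # irrevocably included
    for s in sorted(reserved, reverse=True):
        if packed + s <= 1.0:
            packed += s               # reserved items placed at the end
    return packed - cost
```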

The $P_2$-packing problem asks whether a graph contains $k$ vertex-disjoint paths, each of length two. We continue the study of its kernelization algorithms and develop a $5k$-vertex kernel.

PAC-Bayes is an analysis framework in which the training error is expressed as a weighted average over the hypotheses in a posterior distribution while incorporating prior knowledge. In addition to being a pure generalization-bound analysis tool, a PAC-Bayesian bound can also be incorporated into an objective function to train a probabilistic neural network, making this a powerful and relevant framework that can numerically provide a tight generalization bound for supervised learning. For simplicity, we refer to training probabilistic neural networks with objectives derived from PAC-Bayesian bounds as {\it PAC-Bayesian learning}. Despite their empirical success, the theoretical analysis of PAC-Bayesian learning for neural networks is rarely explored. This paper proposes a new class of convergence and generalization analyses for PAC-Bayes learning when it is used to train over-parameterized neural networks by gradient descent. For a wide probabilistic neural network, we show that when PAC-Bayes learning is applied, the convergence result corresponds to solving a kernel ridge regression when the probabilistic neural tangent kernel (PNTK) is used as its kernel. Based on this finding, we further characterize a uniform PAC-Bayesian generalization bound that improves over the Rademacher complexity-based bound for non-probabilistic neural networks. Finally, drawing on our theoretical results, we propose a proxy measure for efficient hyperparameter selection, which is proven to be time-saving.
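
A hedged PyTorch sketch of a PAC-Bayes-style training objective for a small probabilistic network: a Gaussian posterior over weights is sampled by reparameterization, and the loss adds a generic "empirical risk + KL complexity" term. The prior centered at initialization, the bound form, and all names are assumptions for illustration, not the specific bound analyzed in the paper.

```python
import torch

class ProbLinear(torch.nn.Module):
    """Linear layer with a Gaussian posterior over weights; the prior is a
    unit-variance Gaussian centred at the initialization (an assumption)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = torch.nn.Parameter(0.1 * torch.randn(d_out, d_in))
        self.rho = torch.nn.Parameter(torch.full((d_out, d_in), -3.0))
        self.register_buffer("prior_mu", self.mu.detach().clone())

    def forward(self, x):
        sigma = torch.nn.functional.softplus(self.rho)
        w = self.mu + sigma * torch.randn_like(sigma)   # reparameterized sample
        return x @ w.t()

    def kl(self):
        # KL( N(mu, sigma^2) || N(prior_mu, 1) ), summed over all weights
        sigma2 = torch.nn.functional.softplus(self.rho) ** 2
        return 0.5 * (sigma2 + (self.mu - self.prior_mu) ** 2
                      - 1.0 - torch.log(sigma2)).sum()

def pac_bayes_objective(layers, x, y, n, delta=0.05):
    """Generic 'empirical risk + complexity' surrogate in the spirit of
    PAC-Bayes training objectives.  `layers` is a (hidden, output) pair of
    ProbLinear modules; y is a float tensor of 0/1 labels; n is the
    training-set size."""
    h = torch.relu(layers[0](x))
    logits = layers[1](h).squeeze(-1)
    emp_risk = torch.nn.functional.binary_cross_entropy_with_logits(logits, y)
    kl = sum(layer.kl() for layer in layers)
    complexity = torch.sqrt(
        (kl + torch.log(torch.tensor(2 * n ** 0.5 / delta))) / (2 * n))
    return emp_risk + complexity
```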

We study the problem of learning in the stochastic shortest path (SSP) setting, where an agent seeks to minimize the expected cost accumulated before reaching a goal state. We design a novel model-based algorithm EB-SSP that carefully skews the empirical transitions and perturbs the empirical costs with an exploration bonus to guarantee both optimism and convergence of the associated value iteration scheme. We prove that EB-SSP achieves the minimax regret rate $\widetilde{O}(B_{\star} \sqrt{S A K})$, where $K$ is the number of episodes, $S$ is the number of states, $A$ is the number of actions and $B_{\star}$ bounds the expected cumulative cost of the optimal policy from any state, thus closing the gap with the lower bound. Interestingly, EB-SSP obtains this result while being parameter-free, i.e., it does not require any prior knowledge of $B_{\star}$, nor of $T_{\star}$ which bounds the expected time-to-goal of the optimal policy from any state. Furthermore, we illustrate various cases (e.g., positive costs, or general costs when an order-accurate estimate of $T_{\star}$ is available) where the regret only contains a logarithmic dependence on $T_{\star}$, thus yielding the first horizon-free regret bound beyond the finite-horizon MDP setting.
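
Below is a sketch of the value-iteration component for an SSP instance, with an optional bonus subtracted from the costs to mimic the optimistic cost perturbation used by exploration-bonus algorithms; the skewing of empirical transitions that EB-SSP applies is not reproduced, and the array layout is an assumption for illustration.

```python
import numpy as np

def ssp_value_iteration(P, c, goal, bonus=None, iters=1000, tol=1e-8):
    """Value iteration for a stochastic shortest path instance.
    P[s, a, s'] are transition probabilities, c[s, a] >= 0 are costs, and the
    goal state is treated as absorbing with zero cost.  If `bonus` is given,
    it is subtracted from the costs (clipped at zero) for optimism."""
    cost = c if bonus is None else np.maximum(c - bonus, 0.0)
    V = np.zeros(P.shape[0])
    for _ in range(iters):
        Q = cost + (P * V[None, None, :]).sum(axis=2)   # shape (S, A)
        V_new = Q.min(axis=1)
        V_new[goal] = 0.0                               # goal costs nothing
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    policy = (cost + (P * V[None, None, :]).sum(axis=2)).argmin(axis=1)
    return V, policy
```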

We consider the exploration-exploitation trade-off in reinforcement learning and we show that an agent imbued with a risk-seeking utility function is able to explore efficiently, as measured by regret. The parameter that controls how risk-seeking the agent is can be optimized exactly, or annealed according to a schedule. We call the resulting algorithm K-learning and show that the corresponding K-values are optimistic for the expected Q-values at each state-action pair. The K-values induce a natural Boltzmann exploration policy for which the `temperature' parameter is equal to the risk-seeking parameter. This policy achieves an expected regret bound of $\tilde O(L^{3/2} \sqrt{S A T})$, where $L$ is the time horizon, $S$ is the number of states, $A$ is the number of actions, and $T$ is the total number of elapsed time-steps. This bound is only a factor of $L$ larger than the established lower bound. K-learning can be interpreted as mirror descent in the policy space, and it is similar to other well-known methods in the literature, including Q-learning, soft-Q-learning, and maximum entropy policy gradient, and is closely related to optimism and count based exploration methods. K-learning is simple to implement, as it only requires adding a bonus to the reward at each state-action and then solving a Bellman equation. We conclude with a numerical example demonstrating that K-learning is competitive with other state-of-the-art algorithms in practice.
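
The following is a schematic backup consistent with the description above: a bonus is added to the reward, a soft Bellman equation with temperature `tau` is solved backwards over the horizon, and the resulting K-values induce a Boltzmann policy. The concrete bonus and backup of the paper are not reproduced; this is an illustrative sketch.

```python
import numpy as np

def k_learning_backup(P, r, bonus, tau, H):
    """Finite-horizon soft backup in the spirit of K-learning.
    P[s, a, s'] are transition probabilities, r and bonus have shape (S, A),
    tau > 0 is the risk-seeking / temperature parameter, H is the horizon.
    Returns the K-values and the induced Boltzmann policy per step."""
    S, A, _ = P.shape
    K = np.zeros((H + 1, S, A))
    policy = np.zeros((H, S, A))
    for h in range(H - 1, -1, -1):
        # Soft value of the next step: tau * log-sum-exp of K-values / tau.
        V_next = tau * np.log(np.exp(K[h + 1] / tau).sum(axis=1))   # shape (S,)
        K[h] = r + bonus + (P * V_next[None, None, :]).sum(axis=2)
        policy[h] = np.exp(K[h] / tau)
        policy[h] /= policy[h].sum(axis=1, keepdims=True)           # Boltzmann
    return K[:H], policy
```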

This work considers the problem of provably optimal reinforcement learning for episodic finite-horizon MDPs, i.e., how an agent learns to maximize its long-term reward in an uncertain environment. The main contribution is a novel algorithm, Variance-reduced Upper Confidence Q-learning (vUCQ), which enjoys a regret bound of $\widetilde{O}(\sqrt{HSAT} + H^5SA)$, where $T$ is the number of time steps the agent acts in the MDP, $S$ is the number of states, $A$ is the number of actions, and $H$ is the (episodic) horizon time. This is the first regret bound that is both sub-linear in the model size and asymptotically optimal. The algorithm is sub-linear in that the time to achieve $\epsilon$-average regret for any constant $\epsilon$ is $O(SA)$, which is far fewer samples than are required to learn any non-trivial estimate of the transition model (the transition model is specified by $O(S^2A)$ parameters). The importance of sub-linear algorithms is largely the motivation for algorithms such as $Q$-learning and other "model free" approaches. The vUCQ algorithm also enjoys minimax optimal regret in the long run, matching the $\Omega(\sqrt{HSAT})$ lower bound. vUCQ is a successive-refinement method in which the algorithm reduces the variance of the $Q$-value estimates and couples this estimation scheme with an upper-confidence-based algorithm. Technically, it is the coupling of these two techniques that gives the algorithm both the sub-linear regret property and the asymptotically optimal regret.
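
For context, here is a sketch of plain optimistic (upper-confidence) Q-learning of the kind that vUCQ refines; the step sizes, bonus constant, and the assumed `env` interface are illustrative, and the variance-reduction coupling that distinguishes vUCQ is omitted.

```python
import numpy as np

def optimistic_q_learning(env, S, A, H, K, c=1.0):
    """Episodic optimistic Q-learning sketch: Q-values start optimistically
    at H, are updated with step sizes (H + 1) / (H + t), and are inflated by
    an upper-confidence bonus shrinking like 1/sqrt(t).  `env` is assumed to
    expose reset() -> state and step(state, action, h) -> (reward, next_state)."""
    Q = np.full((H, S, A), float(H))        # optimistic initialization
    N = np.zeros((H, S, A), dtype=int)      # visit counts per (step, state, action)
    for _ in range(K):
        s = env.reset()
        for h in range(H):
            a = int(np.argmax(Q[h, s]))
            reward, s_next = env.step(s, a, h)
            N[h, s, a] += 1
            t = N[h, s, a]
            alpha = (H + 1) / (H + t)
            bonus = c * np.sqrt(H ** 3 * np.log(S * A * H * K) / t)
            v_next = min(H, Q[h + 1, s_next].max()) if h + 1 < H else 0.0
            Q[h, s, a] += alpha * (reward + v_next + bonus - Q[h, s, a])
            s = s_next
    return Q
```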
