
A garden $G$ is populated by $n\ge 1$ bamboos $b_1, b_2, \dots, b_n$ with respective daily growth rates $h_1 \ge h_2 \ge \dots \ge h_n$; the initial height of every bamboo is zero. A robotic gardener maintaining the garden regularly attends the bamboos and trims them to height zero according to some schedule. The Bamboo Garden Trimming Problem (BGT) is to design a perpetual schedule of cuts that keeps the elevation of the bamboo garden as low as possible. The bamboo garden is a metaphor for a collection of machines that have to be serviced, with different frequencies, by a robot that can service only one machine at a time; the objective is to design a perpetual servicing schedule that minimizes the maximum (weighted) waiting time for service. We consider two variants of BGT. In discrete BGT the robot trims exactly one bamboo at the end of each day. In continuous BGT the bamboos can be cut at any time, but the robot needs time to move from one bamboo to the next. For discrete BGT, we give tighter approximation algorithms for the case of balanced growth rates and for the general case; the former settles one of the conjectures about the Pinwheel problem, and the latter improves on the previous best approximation ratio. For continuous BGT, we propose approximation algorithms achieving approximation ratios $O(\log \lceil h_1/h_n\rceil)$ and $O(\log n)$.
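
As a concrete illustration of the discrete variant, the following minimal simulation uses the folklore "Reduce-Max" greedy strategy (each day, trim the currently tallest bamboo). This is an illustrative baseline under invented growth rates, not necessarily the algorithm analyzed in the paper.

```python
def reduce_max_schedule(rates, days):
    """Simulate discrete BGT under the Reduce-Max greedy strategy:
    every day each bamboo grows by its daily rate, then the tallest
    one is trimmed to height zero.  Returns the maximum height ever
    observed, i.e. the quality of the resulting schedule."""
    heights = [0.0] * len(rates)
    worst = 0.0
    for _ in range(days):
        heights = [h + r for h, r in zip(heights, rates)]
        tallest = max(range(len(heights)), key=heights.__getitem__)
        worst = max(worst, heights[tallest])
        heights[tallest] = 0.0
    return worst

# Example with rates summing to 1: the maximum observed height stays
# within a small constant multiple of the top rate h_1 = 0.5.
print(reduce_max_schedule([0.5, 0.25, 0.25], 1000))  # → 1.25
```

With a single bamboo the schedule is trivially optimal, while with several bamboos the simulation makes the trade-offs between fast- and slow-growing bamboos tangible.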


We introduce the extremal range, a local statistic for studying the spatial extent of extreme events in random fields on $\mathbb{R}^2$. Conditioned on exceedance of a high threshold at a location $s$, the extremal range at $s$ is the random variable defined as the smallest distance from $s$ to a location where there is a non-exceedance. We leverage tools from excursion-set theory to study distributional properties of the extremal range, propose parametric models and predict the median extremal range at extreme threshold levels. The extremal range captures the rate at which the spatial extent of conditional extreme events scales for increasingly high thresholds, and we relate its distributional properties with the bivariate tail dependence coefficient and the extremal index of time series in classical Extreme-Value Theory. Consistent estimation of the distribution function of the extremal range for stationary random fields is proven. For non-stationary random fields, we implement generalized additive median regression to predict extremal-range maps at very high threshold levels. An application to two large daily temperature datasets, namely reanalyses and climate-model simulations for France, highlights decreasing extremal dependence for increasing threshold levels and reveals strong differences in joint tail decay rates between reanalyses and simulations.
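
On a discretized field, the extremal range at a grid point can be estimated directly from its definition: the smallest distance from an exceedance location to a non-exceedance. The sketch below uses an invented Gaussian-bump toy field (not data or code from the paper) purely to make the definition concrete.

```python
import numpy as np

def empirical_extremal_range(field, threshold, s):
    """Given a 2D array `field` whose value at grid point `s` exceeds
    `threshold`, return the smallest Euclidean distance (in grid
    units) from `s` to a grid point where the field does NOT exceed
    the threshold -- the empirical analogue of the extremal range."""
    if field[s] <= threshold:
        raise ValueError("no exceedance at s")
    non_exceedances = np.argwhere(field <= threshold)
    if len(non_exceedances) == 0:
        return np.inf
    dists = np.sqrt(((non_exceedances - np.array(s)) ** 2).sum(axis=1))
    return dists.min()

# Toy field: a smooth bump exceeding 0.5 only near the grid centre.
x = np.linspace(-2, 2, 41)           # grid spacing 0.1
X, Y = np.meshgrid(x, x)
f = np.exp(-(X**2 + Y**2))           # peak value 1 at the centre
r = empirical_extremal_range(f, 0.5, (20, 20))
print(r)
```

Raising the threshold shrinks the exceedance region around the peak, so `r` decreases, which mirrors the scaling behaviour of conditional extreme events that the abstract describes.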

The Unitary Synthesis Problem (Aaronson-Kuperberg 2007) asks whether any $n$-qubit unitary $U$ can be implemented by an efficient quantum algorithm $A$ augmented with an oracle that computes an arbitrary Boolean function $f$. In other words, can the task of implementing any unitary be efficiently reduced to the task of implementing any Boolean function? In this work, we prove a one-query lower bound for unitary synthesis. We show that there exist unitaries $U$ such that no quantum polynomial-time oracle algorithm $A^f$ can implement $U$, even approximately, if it only makes one (quantum) query to $f$. Our approach also has implications for quantum cryptography: we prove (relative to a random oracle) the existence of quantum cryptographic primitives that remain secure against all one-query adversaries $A^{f}$. Since such one-query algorithms can decide any language, solve any classical search problem, and even prepare any quantum state, our result suggests that implementing random unitaries and breaking quantum cryptography may be harder than all of these tasks. To prove this result, we formulate unitary synthesis as an efficient challenger-adversary game, which enables proving lower bounds by analyzing the maximum success probability of an adversary $A^f$. Our main technical insight is to identify a natural spectral relaxation of the one-query optimization problem, which we bound using tools from random matrix theory. We view our framework as a potential avenue to rule out polynomial-query unitary synthesis, and we state conjectures in this direction.

The stability of an approximating sequence $(A_n)$ for an operator $A$ usually requires, besides invertibility of $A$, the invertibility of further operators, say $B, C, \dots$, that are well-associated to the sequence $(A_n)$. We study this set, $\{A,B,C,\dots\}$, of so-called stability indicators of $(A_n)$ and connect it to the asymptotics of $\|A_n\|$, $\|A_n^{-1}\|$ and $\kappa(A_n)=\|A_n\|\|A_n^{-1}\|$ as well as to spectral pollution by showing that $\limsup {\rm Spec}_\varepsilon A_n= {\rm Spec}_\varepsilon A\cup{\rm Spec}_\varepsilon B\cup{\rm Spec}_\varepsilon C\cup\dots$. We further specify, for each of $\|A_n\|$, $\|A_n^{-1}\|$, $\kappa(A_n)$ and ${\rm Spec}_\varepsilon A_n$, under which conditions even convergence applies.
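
A toy numerical illustration of the quantities $\|A_n\|$, $\|A_n^{-1}\|$ and $\kappa(A_n)$: the operator $A = I + \frac{1}{2}S$ (with $S$ the forward shift on $\ell^2$) and its finite sections are invented for this example and are not taken from the paper; they merely show a stable sequence whose condition numbers converge.

```python
import numpy as np

def finite_section(n):
    """n-th finite section of A = I + 0.5*S, with S the forward shift
    on l^2: a lower-bidiagonal Toeplitz matrix."""
    return np.eye(n) + 0.5 * np.eye(n, k=-1)

for n in (10, 50, 200):
    A = finite_section(n)
    norm = np.linalg.norm(A, 2)                    # spectral norm
    inv_norm = np.linalg.norm(np.linalg.inv(A), 2)
    print(n, norm, inv_norm, norm * inv_norm)
# For this stable sequence the condition numbers kappa(A_n) increase
# towards ||A|| * ||A^{-1}|| = 1.5 * 2 = 3.
```

For an unstable sequence one would instead see $\|A_n^{-1}\|$, and hence $\kappa(A_n)$, blow up, which is exactly the phenomenon that the stability indicators $B, C, \dots$ are designed to detect.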

This study proposes an interpretable neural network-based non-proportional odds model (N$^3$POM) for ordinal regression. N$^3$POM is different from conventional approaches to ordinal regression with non-proportional models in several ways: (1) N$^3$POM is designed to directly handle continuous responses, whereas standard methods typically treat de facto ordered continuous variables as discrete, (2) instead of estimating response-dependent finite coefficients of linear models from discrete responses as is done in conventional approaches, we train a non-linear neural network to serve as a coefficient function. Thanks to the neural network, N$^3$POM offers flexibility while preserving the interpretability of conventional ordinal regression. We establish a sufficient condition under which the predicted conditional cumulative probability locally satisfies the monotonicity constraint over a user-specified region in the covariate space. Additionally, we provide a monotonicity-preserving stochastic (MPS) algorithm for effectively training the neural network. We apply N$^3$POM to several real-world datasets.

This paper presents a new achievable scheme for coded caching systems with $\mathsf{N}$ files, $\mathsf{K}=\mathsf{N}$ users, and cache size $\mathsf{M}=1/(\mathsf{N}-1)$. The scheme employs linear coding in the cache placement phase and a three-stage transmission scheme designed to eliminate interference in the delivery phase. The achievable load meets a known converse bound, which imposes no constraint on the cache placement, and is thus optimal. This new result, together with known inner and outer bounds, shows the optimality of linear coding placement for $\mathsf{M} \leq 1/(\mathsf{N}-1)$ when $\mathsf{K}=\mathsf{N}\geq 3$. Interestingly, the proposed scheme is relatively simple but requires operations on a finite field of size at least 3.

We study the problem of adaptive variable selection in a Gaussian white noise model of intensity $\varepsilon$ under certain sparsity and regularity conditions on an unknown regression function $f$. The $d$-variate regression function $f$ is assumed to be a sum of functions each depending on a smaller number $k$ of variables ($1 \leq k \leq d$). These functions are unknown to us and only a few of them are non-zero. We assume that $d=d_\varepsilon \to \infty$ as $\varepsilon \to 0$ and consider the cases when $k$ is fixed and when $k=k_\varepsilon \to \infty$ with $k=o(d)$ as $\varepsilon \to 0$. In this work, we introduce an adaptive selection procedure that, under some model assumptions, identifies exactly all non-zero $k$-variate components of $f$. In addition, we establish conditions under which exact identification of the non-zero components is impossible. These conditions ensure that the proposed selection procedure is the best possible in the asymptotically minimax sense with respect to the Hamming risk.

A vertex set $L\subseteq V$ is a liar's vertex-edge dominating set of a graph $G=(V,E)$ if for every $e_i\in E$, $|N_G[e_i]\cap L|\geq 2$, and for every pair of distinct edges $e_i$ and $e_j$, $|(N_G[e_i]\cup N_G[e_j])\cap L|\geq 3$. In this paper, we introduce the notion of liar's vertex-edge domination, which arises naturally from applications in communication networks. Given a graph $G$, the \textsc{Minimum Liar's Vertex-Edge Domination Problem} (\textsc{MinLVEDP}) asks to find a liar's vertex-edge dominating set of $G$ of minimum cardinality. We study this problem from an algorithmic point of view. We show that \textsc{MinLVEDP} can be solved in linear time for trees, whereas the decision version of the problem is NP-complete for general graphs. We further study approximation algorithms for this problem. We propose an $O(\ln \Delta(G))$-approximation algorithm for \textsc{MinLVEDP} in general graphs, where $\Delta(G)$ is the maximum degree of the input graph. On the negative side, we show that \textsc{MinLVEDP} cannot be approximated within a factor of $\frac{1}{2}(\frac{1}{8}-\epsilon)\ln|V|$ for any $\epsilon >0$, unless $NP\subseteq DTIME(|V|^{O(\log\log|V|)})$.
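
The two conditions in the definition can be checked mechanically. The sketch below verifies them for a candidate set, taking $N_G[e]$ for $e=uv$ to be $\{u,v\}\cup N(u)\cup N(v)$ (the standard closed neighborhood of an edge in vertex-edge domination); the path graph used as input is our own toy example.

```python
from itertools import combinations

def is_liars_ve_dominating(adj, L):
    """Check whether vertex set L is a liar's vertex-edge dominating
    set of the graph given by adjacency dict `adj`:
      (1) |N[e] ∩ L| >= 2 for every edge e, and
      (2) |(N[e_i] ∪ N[e_j]) ∩ L| >= 3 for all distinct edges e_i, e_j,
    where N[e] for e = uv is {u, v} ∪ N(u) ∪ N(v)."""
    L = set(L)
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    def edge_nbhd(e):
        u, v = tuple(e)
        return {u, v} | set(adj[u]) | set(adj[v])
    nbhds = {e: edge_nbhd(e) for e in edges}
    if any(len(nbhds[e] & L) < 2 for e in edges):
        return False
    return all(len((nbhds[e] | nbhds[f]) & L) >= 3
               for e, f in combinations(edges, 2))

# Path on 4 vertices: 0 - 1 - 2 - 3.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_liars_ve_dominating(path, {0, 1, 2}))  # → True
print(is_liars_ve_dominating(path, {1, 2}))     # → False (pairs fail)
```

Such a checker runs in polynomial time, which is consistent with the decision version of the problem being in NP.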

We consider several basic questions on distributed routing in directed graphs with multiple additive costs, or metrics, and multiple constraints. Distributed routing in this sense is used in several protocols, such as IS-IS and OSPF. A practical approach to the multi-constraint routing problem is to first combine the metrics into a single `composite' metric and then apply a one-to-all shortest-path algorithm, e.g., Dijkstra's, to find shortest-path trees. We show that, in general, even if a feasible path exists and is known for every source-destination pair, it is impossible to guarantee distributed routing under several constraints. We also study the question of choosing the optimal `composite' metric. We show that, under certain mathematical assumptions, we can efficiently find a convex combination of several metrics that maximizes the number of discovered feasible paths. Sometimes this can be done analytically, and it is in general possible using what we call a `smart iterative approach'. We illustrate these findings by extensive experiments on several typical network topologies.
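
The "combine, then run Dijkstra" approach can be sketched as follows; the example graph, the two metrics (e.g. delay and monetary cost), and the convex-combination weight $\alpha$ are invented for illustration.

```python
import heapq

def dijkstra(graph, source, weight):
    """One-to-all shortest paths, where `weight` maps an edge's vector
    of metrics to a single composite cost.  `graph` maps each node to
    a list of (neighbor, metric_vector) pairs."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, metrics in graph.get(u, []):
            nd = d + weight(metrics)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Two additive metrics per edge, folded into one composite metric by a
# convex combination with weight alpha.
alpha = 0.7
composite = lambda m: alpha * m[0] + (1 - alpha) * m[1]
g = {
    "a": [("b", (1.0, 4.0)), ("c", (2.0, 1.0))],
    "b": [("d", (1.0, 1.0))],
    "c": [("d", (1.0, 1.0))],
}
print(dijkstra(g, "a", composite))
```

Varying $\alpha$ changes which tree Dijkstra's algorithm discovers, which is precisely why the choice of the convex combination matters for how many feasible paths are found.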

Given a function $f: [a,b] \to \mathbb{R}$, if $f$ is continuous with $f(a)<0$ and $f(b)>0$, the Intermediate Value Theorem implies that $f$ has a root in $[a,b]$. Moreover, given a value oracle for $f$, an approximate root of $f$ can be computed using the bisection method, and the number of required evaluations is polynomial in the number of accuracy digits. The goal of this paper is to identify conditions under which this polynomiality result extends to a multi-dimensional function that satisfies the conditions of Miranda's theorem -- the natural multi-dimensional extension of the Intermediate Value Theorem. In general, finding an approximate root of $f$ might require an exponential number of evaluations even for a two-dimensional function. We show that if $f$ is two-dimensional and at least one component of $f$ is monotone, an approximate root of $f$ can be found using a polynomial number of evaluations. This result has applications to computing an approximately envy-free division of a cake among three groups.
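
The one-dimensional bisection argument referred to above can be sketched as follows (the test function $x^3 - 2$ is our own example):

```python
def bisection(f, a, b, tol=1e-9):
    """Approximate a root of a continuous f with f(a) < 0 < f(b).
    Each iteration halves the bracketing interval, so reaching
    accuracy `tol` takes about log2((b - a) / tol) evaluations --
    polynomial in the number of accuracy digits."""
    assert f(a) < 0 < f(b)
    while b - a > tol:
        m = (a + b) / 2
        if f(m) < 0:
            a = m  # sign change is in the right half
        else:
            b = m  # sign change is in the left half
    return (a + b) / 2

root = bisection(lambda x: x**3 - 2, 0.0, 2.0)
print(root)  # close to 2 ** (1/3) ≈ 1.2599
```

In higher dimensions no such sign-based halving is available in general, which is why the paper's monotonicity assumption is needed to recover polynomial evaluation counts.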

We improve the previously best known upper bounds on the sizes of $\theta$-spherical codes for every $\theta<\theta^*\approx 62.997^{\circ}$ by a factor of at least $0.4325$, in sufficiently high dimensions. Furthermore, for sphere packing densities in dimensions $n\geq 2000$ we obtain an improvement by a factor of at least $0.4325+\frac{51}{n}$. Our method also breaks many non-numerical sphere packing density bounds in smaller dimensions. This is the first such improvement in each dimension since the work of Kabatyanskii and Levenshtein~\cite{KL} and its later improvement by Levenshtein~\cite{Leven79}. Novelties of this paper include the analysis of triple correlations, the use of concentration of mass in high dimensions, and the study of the spacings between the roots of Jacobi polynomials.
