The \emph{$p$-processor cup game} is a classic and widely studied scheduling problem that captures the setting in which a $p$-processor machine must assign tasks to processors over time in order to ensure that no individual task ever falls too far behind. The problem is formalized as a multi-round game in which two players, a filler (who assigns work to tasks) and an emptier (who schedules tasks), compete. The emptier's goal is to minimize backlog, which is the maximum amount of outstanding work for any task. Recently, Kuszmaul and Westover (ITCS, 2021) proposed the \emph{variable-processor cup game}, which considers the same problem, except that the amount of resources available to the players (i.e., the number $p$ of processors) fluctuates between rounds of the game. They showed that this seemingly small modification fundamentally changes the dynamics of the game: whereas the optimal backlog in the fixed $p$-processor game is $\Theta(\log n)$, independent of $p$, the optimal backlog in the variable-processor game is $\Theta(n)$. The latter result was only known to apply to games with \emph{exponentially many} rounds, however, and it has remained an open question what the optimal trade-off between time and backlog is for shorter games. This paper establishes a tight trade-off curve between time and backlog in the variable-processor cup game. Importantly, we prove that for a game consisting of $t$ rounds, the optimal backlog is $\Theta(n)$ if and only if $t \ge \Omega(n^3)$. Our techniques also allow us to resolve several other open questions concerning how the variable-processor cup game behaves in beyond-worst-case-analysis settings.
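
To make the round structure concrete, the following is a minimal simulation sketch of one standard formalization of the fixed $p$-processor game (per round, the filler adds $p$ units of water, at most one per cup, and the emptier then removes up to one unit from each of $p$ cups); the random filler and greedy emptier used here are purely illustrative toy strategies and are not the constructions used in the paper.

\begin{verbatim}
# Minimal sketch of one standard formalization of the fixed p-processor cup
# game; the random filler and greedy emptier are illustrative toy strategies.
import random

def play_cup_game(n, p, rounds, seed=0):
    rng = random.Random(seed)
    fill = [0.0] * n
    backlog = 0.0
    for _ in range(rounds):
        for cup in rng.sample(range(n), p):        # filler: p units, <= 1 per cup
            fill[cup] += 1.0
        fullest = sorted(range(n), key=lambda c: -fill[c])[:p]
        for cup in fullest:                        # emptier: 1 unit from p cups
            fill[cup] = max(0.0, fill[cup] - 1.0)
        backlog = max(backlog, max(fill))
    return backlog

print(play_cup_game(n=8, p=3, rounds=1000))
\end{verbatim}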

Related Content

In a completely randomized experiment, the variances of treatment effect estimators in the finite population are usually not identifiable and hence not estimable. Although some estimable bounds of the variances have been established in the literature, few of them are derived in the presence of covariates. In this paper, the difference-in-means estimator and the Wald estimator are considered in the completely randomized experiment with perfect compliance and noncompliance, respectively. Sharp bounds for the variances of these two estimators are established when covariates are available. Furthermore, consistent estimators for such bounds are obtained, which can be used to shorten the confidence intervals and improve the power of tests. Confidence intervals are constructed based on the consistent estimators of the upper bounds, whose coverage rates are uniformly asymptotically guaranteed. Simulations are conducted to evaluate the proposed methods, which are also illustrated with two real data analyses.
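
For context, here is a minimal sketch of the difference-in-means estimator together with the classical conservative (Neyman-style) variance estimate that ignores covariates; the paper's covariate-based sharp bounds refine this baseline, so the code below shows only the starting point, not the proposed method.

\begin{verbatim}
# Difference-in-means estimate with the classical conservative variance
# estimate s1^2/n1 + s0^2/n0 (no covariates); the paper's covariate-based
# sharp bounds refine this baseline and are not implemented here.
import numpy as np

def diff_in_means(y, z):
    # y: outcomes; z: 0/1 treatment indicators from a completely randomized design.
    y, z = np.asarray(y, dtype=float), np.asarray(z, dtype=int)
    y1, y0 = y[z == 1], y[z == 0]
    tau_hat = y1.mean() - y0.mean()
    var_bound = y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0)
    return tau_hat, var_bound

rng = np.random.default_rng(0)
z = rng.permutation([1] * 50 + [0] * 50)
y = 1.0 * z + rng.normal(size=100)            # toy data with true effect 1
tau_hat, var_bound = diff_in_means(y, z)
print(tau_hat, 1.96 * np.sqrt(var_bound))     # estimate and 95% CI half-width
\end{verbatim}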

We analyze the finite element discretization of distributed elliptic optimal control problems with variable energy regularization, where the usual $L^2(\Omega)$ norm regularization term with a constant regularization parameter $\varrho$ is replaced by a suitable representation of the energy norm in $H^{-1}(\Omega)$ involving a variable, mesh-dependent regularization parameter $\varrho(x)$. It turns out that the error between the computed finite element state $\widetilde{u}_{\varrho h}$ and the desired state $\bar{u}$ (target) is optimal in the $L^2(\Omega)$ norm provided that $\varrho(x)$ behaves like the local mesh size squared. This is especially important when adaptive meshes are used in order to approximate discontinuous target functions. The adaptive scheme can be driven by the computable and localizable error norm $\| \widetilde{u}_{\varrho h} - \bar{u}\|_{L^2(\Omega)}$ between the finite element state $\widetilde{u}_{\varrho h}$ and the target $\bar{u}$. The numerical results not only illustrate our theoretical findings, but also show that the iterative solvers for the discretized reduced optimality system are very efficient and robust.
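
In rough notation (a sketch of one common way to write such a problem; the paper's precise weighted $H^{-1}(\Omega)$ representation may differ), the reduced cost functional being minimized has the form

\[
J_\varrho(z) \;=\; \frac{1}{2}\,\| S z - \bar{u} \|_{L^2(\Omega)}^2 \;+\; \frac{1}{2}\,\| z \|_{H^{-1}(\Omega),\varrho}^2,
\qquad \varrho(x) \simeq h(x)^2 \ \text{locally},
\]

where $S$ denotes the elliptic solution operator mapping the control $z$ to the state $u = Sz$, $\|\cdot\|_{H^{-1}(\Omega),\varrho}$ is a $\varrho$-weighted representation of the $H^{-1}(\Omega)$ energy norm, and $h(x)$ is the local mesh size.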

The Strong Exponential Time Hypothesis (SETH) asserts that for every $\varepsilon>0$ there exists $k$ such that $k$-SAT requires time $(2-\varepsilon)^n$. The field of fine-grained complexity has leveraged SETH to prove quite tight conditional lower bounds for dozens of problems in various domains and complexity classes, including Edit Distance, Graph Diameter, Hitting Set, Independent Set, and Orthogonal Vectors. Yet, it has been repeatedly asked in the literature whether SETH-hardness results can be proven for other fundamental problems such as Hamiltonian Path, Independent Set, Chromatic Number, MAX-$k$-SAT, and Set Cover. In this paper, we show that, for any $\lambda>1$, fine-grained reductions implying even $\lambda^n$-hardness of these problems from SETH would imply new circuit lower bounds: super-linear lower bounds for Boolean series-parallel circuits or polynomial lower bounds for arithmetic circuits (each of which is a four-decade open question). We also extend this barrier result to the class of parameterized problems. Namely, for every $\lambda>1$ we conditionally rule out fine-grained reductions implying SETH-based lower bounds of $\lambda^k$ for a number of problems parameterized by the solution size $k$. Our main technical tool is a new concept called polynomial formulations. In particular, we show that many problems can be represented by relatively succinct low-degree polynomials, and that any problem with such a representation cannot be proven SETH-hard (without proving new circuit lower bounds).

The amoebot model abstracts active programmable matter as a collection of simple computational elements called amoebots that interact locally to collectively achieve tasks of coordination and movement. Since its introduction at SPAA 2014, a growing body of literature has adapted its assumptions for a variety of problems; however, without a standardized hierarchy of assumptions, precise systematic comparison of results under the amoebot model is difficult. We propose the canonical amoebot model, an updated formalization that distinguishes between core model features and families of assumption variants. A key improvement addressed by the canonical amoebot model is concurrency. Much of the existing literature implicitly assumes amoebot actions are isolated and reliable, reducing analysis to the sequential setting where at most one amoebot is active at a time. However, real programmable matter systems are concurrent. The canonical amoebot model formalizes all amoebot communication as message passing, leveraging adversarial activation models of concurrent executions. Under this granular treatment of time, we take two complementary approaches to concurrent algorithm design. We first establish a set of sufficient conditions for algorithm correctness under any concurrent execution, embedding concurrency control directly in algorithm design. We then present a concurrency control framework that uses locks to convert amoebot algorithms that terminate in the sequential setting and satisfy certain conventions into algorithms that exhibit equivalent behavior in the concurrent setting. As a case study, we demonstrate both approaches using a simple algorithm for hexagon formation. Together, the canonical amoebot model and these complementary approaches to concurrent algorithm design open new directions for distributed computing research on programmable matter.
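
As a rough illustration of the lock-based idea (this is a generic concurrency-control pattern, not the paper's framework; all names below are hypothetical), an amoebot would execute an action from the sequential setting only after acquiring locks on itself and its neighborhood, so that concurrent executions behave like a sequential schedule:

\begin{verbatim}
# Generic lock-based wrapper (illustrative only; not the paper's framework).
import threading

class Amoebot:
    def __init__(self, ident):
        self.id = ident
        self.lock = threading.Lock()
        self.neighbors = []

    def try_action(self, action):
        # Acquire locks on the whole neighborhood in a fixed (id-based)
        # order to avoid deadlock; back off if any lock is unavailable.
        scope = sorted([self] + self.neighbors, key=lambda a: a.id)
        acquired = []
        try:
            for a in scope:
                if not a.lock.acquire(blocking=False):
                    return False   # retry at a later activation
                acquired.append(a)
            action(self)           # run the sequential-setting action atomically
            return True
        finally:
            for a in acquired:
                a.lock.release()
\end{verbatim}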

We develop a modified online mirror descent framework that is suitable for building adaptive and parameter-free algorithms in unbounded domains. We leverage this technique to develop the first unconstrained online linear optimization algorithm achieving an optimal dynamic regret bound, and we further demonstrate that natural strategies based on Follow-the-Regularized-Leader are unable to achieve similar results. We also apply our mirror descent framework to build new parameter-free implicit updates, as well as a simplified and improved unconstrained scale-free algorithm.
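
For orientation, here is a minimal sketch of plain unconstrained online mirror descent with the Euclidean mirror map $\psi(w) = \tfrac{1}{2}\|w\|_2^2$ (which reduces to online gradient descent); the adaptive, parameter-free modifications developed in the paper are layered on top of this basic template and are not shown.

\begin{verbatim}
# Minimal unconstrained online mirror descent with the Euclidean mirror map,
# i.e. online gradient descent; the paper's modified framework adds adaptivity
# and parameter-freeness that this basic template lacks.
import numpy as np

def online_mirror_descent(grads, eta=0.1, dim=2):
    w = np.zeros(dim)
    iterates = []
    for g in grads:                      # g = gradient of the current loss at w
        iterates.append(w.copy())
        # Mirror step: grad(psi)(w_next) = grad(psi)(w) - eta * g;
        # with psi(w) = ||w||^2 / 2 this is a plain gradient step.
        w = w - eta * np.asarray(g)
    return iterates

# Example with fixed linear losses <g, w>: regret against a comparator u
# is the sum over rounds of <g, w_t - u>.
print(online_mirror_descent([np.array([1.0, -0.5])] * 10)[-1])
\end{verbatim}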

This paper deals with the grouped variable selection problem. A widely used strategy is to augment the negative log-likelihood function with a sparsity-promoting penalty. Existing methods include the group Lasso, group SCAD, and group MCP. The group Lasso solves a convex optimization problem but is plagued by underestimation bias. The group SCAD and group MCP avoid this estimation bias but require solving a nonconvex optimization problem that may be plagued by suboptimal local optima. In this work, we propose an alternative method based on the generalized minimax concave (GMC) penalty, which is a folded concave penalty that maintains the convexity of the objective function. We develop a new method for grouped variable selection in linear regression, the group GMC, that generalizes the strategy of the original GMC estimator. We present an efficient algorithm for computing the group GMC estimator and also prove properties of the solution path to guide its numerical computation and tuning parameter selection in practice. We establish error bounds for both the group GMC and original GMC estimators. A rich set of simulation studies and a real data application indicate that the proposed group GMC approach outperforms existing methods in several different aspects under a wide array of scenarios.
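
For reference, the original (ungrouped) GMC estimator referred to above can be written, in one common formulation (the notation here is ours and is only a sketch), as

\[
\hat{\beta} \in \arg\min_{\beta}\ \frac{1}{2}\|y - X\beta\|_2^2 + \lambda\, \psi_B(\beta),
\qquad
\psi_B(\beta) = \|\beta\|_1 - \min_{v}\Big\{ \|v\|_1 + \frac{1}{2}\|B(\beta - v)\|_2^2 \Big\},
\]

where the matrix $B$ is chosen so that $B^\top B \preceq \lambda^{-1} X^\top X$, which keeps the overall objective convex even though the penalty itself is folded concave; the group GMC proposed in the paper generalizes this construction to grouped coefficients.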

Modeling and control of high-dimensional, nonlinear robotic systems remains a challenging task. While various model- and learning-based approaches have been proposed to address these challenges, they broadly lack generalizability to different control tasks and rarely preserve the structure of the dynamics. In this work, we propose a new, data-driven approach for extracting low-dimensional models from data using Spectral Submanifold Reduction (SSMR). In contrast to other data-driven methods that fit dynamical models to training trajectories, we identify the dynamics on generic, low-dimensional attractors embedded in the full phase space of the robotic system. This allows us to obtain computationally tractable models for control which preserve the system's dominant dynamics and better track trajectories radically different from the training data. We demonstrate the superior performance and generalizability of SSMR in dynamic trajectory tracking tasks vis-a-vis the state of the art, including Koopman operator-based approaches.
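
For contrast, here is a schematic of the kind of generic projection-based, data-driven reduced-order modeling the abstract compares against (project snapshots onto leading modes and regress a low-dimensional polynomial vector field); it is not the SSMR method itself, which instead identifies dynamics on an attracting spectral submanifold, and all names below are ours.

\begin{verbatim}
# Schematic of generic data-driven model reduction (NOT SSMR itself):
# project snapshots onto leading modes and fit a polynomial reduced model.
import numpy as np
from itertools import product

def monomial_exponents(r, degree):
    return [e for e in product(range(degree + 1), repeat=r) if 0 < sum(e) <= degree]

def fit_reduced_model(X, Xdot, r=2, degree=3):
    # X, Xdot: (n_states, n_snapshots) state and time-derivative snapshots.
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    V = U[:, :r]                       # leading modes spanning a low-dim subspace
    z, zdot = V.T @ X, V.T @ Xdot      # reduced coordinates and their derivatives
    feats = np.vstack([np.prod(z ** np.array(e)[:, None], axis=0)
                       for e in monomial_exponents(r, degree)])
    W = zdot @ np.linalg.pinv(feats)   # least squares: zdot ~= W @ feats
    return V, W
\end{verbatim}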

The significant presence of demand charges in electric bills motivates large-load customers to utilize energy storage to reduce the peak procurement from the grid. We herein study the problem of energy storage allocation for peak minimization, under the online setting where irrevocable decisions are sequentially made without knowing future demands. The problem is uniquely challenging due to (i) the coupling of online decisions across time imposed by the inventory constraints and (ii) the noncumulative nature of the peak procurement. We apply the CR-Pursuit framework and address the challenges unique to our minimization problem to design an online algorithm achieving the optimal competitive ratio (CR) among all online algorithms. We show that the optimal CR can be computed in polynomial time by solving a linear number of linear-fractional problems. More importantly, we generalize our approach to develop an \emph{anytime-optimal} online algorithm that achieves the best possible CR at any epoch, given the inputs and online decisions so far. The algorithm retains the optimal worst-case performance and attains adaptive average-case performance. Trace-driven simulations show that our algorithm can decrease the peak demand by an extra 19% compared to baseline alternatives under typical settings.
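
To fix ideas, one simplified way to write the offline counterpart of the peak-minimization problem (our notation; it ignores charging decisions, efficiency losses, and power limits) is

\[
\min_{b_1,\dots,b_T \ge 0}\ \max_{1 \le t \le T}\ \big( d_t - b_t \big)
\quad \text{s.t.} \quad \sum_{t=1}^{T} b_t \le C,
\]

where $d_t$ is the demand in slot $t$, $b_t$ is the energy drawn from storage, and $C$ is the storage capacity; the online version must commit to $b_t$ before seeing $d_{t+1}, \dots, d_T$, which is what makes the coupling through the inventory constraint and the noncumulative peak objective challenging.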

This work considers Gaussian process interpolation with a periodized version of the Mat{\'e}rn covariance function (Stein, 1999, Section 6.7) with Fourier coefficients $\phi(\alpha^2 + j^2)^{-\nu - 1/2}$. Convergence rates are studied for the joint maximum likelihood estimation of $\nu$ and $\phi$ when the data is sampled according to the model. The mean integrated squared error is also analyzed with fixed and estimated parameters, showing that maximum likelihood estimation yields asymptotically the same error as if the ground truth was known. Finally, the case where the observed function is a ``deterministic'' element of a continuous Sobolev space is also considered, suggesting that bounding assumptions on some parameters can lead to different estimates.
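
As a concrete illustration (assuming a unit period and a real, symmetric spectrum; the paper's conventions may differ by constants), such a periodized covariance can be evaluated numerically by truncating its Fourier series:

\begin{verbatim}
# Evaluating a periodized Matern-type covariance on the unit period from its
# Fourier coefficients c_j = phi * (alpha^2 + j^2)^(-nu - 1/2), truncated at J.
# Assumes k(x, y) = c_0 + 2 * sum_{j>=1} c_j * cos(2*pi*j*(x - y)).
import numpy as np

def periodized_matern_cov(x, y, phi=1.0, alpha=1.0, nu=1.5, J=500):
    j = np.arange(1, J + 1)
    c0 = phi * alpha ** (-2.0 * nu - 1.0)                  # j = 0 coefficient
    cj = phi * (alpha ** 2 + j ** 2) ** (-nu - 0.5)
    return c0 + 2.0 * np.sum(cj * np.cos(2.0 * np.pi * j * (x - y)))

print(periodized_matern_cov(0.2, 0.7))
\end{verbatim}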

The weakly compressible smoothed particle hydrodynamics (WCSPH) method has been employed to simulate various physical phenomena involving fluids and solids. Various methods have been proposed to implement the solid wall, inlet/outlet, and other boundary conditions. However, error estimation and the formal rates of convergence for these methods have not been discussed or examined carefully. In this paper, we use the method of manufactured solutions (MMS) to verify the convergence properties of a variety of commonly employed solid, inlet, and outlet boundary condition implementations. In order to perform this study, we propose various manufactured solutions for different domains. On the basis of the convergence offered by these methods, we systematically propose a convergent WCSPH scheme along with suitable methods for implementing the boundary conditions. Along with other recent developments in the use of adaptive resolution, this paves the way for accurate and efficient simulation of incompressible or weakly-compressible fluid flows using the SPH method.
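
To illustrate the verification workflow (a generic MMS-style convergence check, not the paper's specific test cases), one runs the solver with the manufactured source term at several resolutions and estimates the observed order of convergence from successive errors:

\begin{verbatim}
# Observed order of convergence from errors measured at successive resolutions;
# in an MMS study the errors are computed against the manufactured solution.
import math

def observed_order(errors, hs):
    # errors[i]: discretization error at mesh/particle spacing hs[i] (decreasing).
    return [math.log(errors[i] / errors[i + 1]) / math.log(hs[i] / hs[i + 1])
            for i in range(len(errors) - 1)]

# Errors roughly quartering as h halves indicate second-order convergence.
print(observed_order([0.08, 0.021, 0.0052], [0.1, 0.05, 0.025]))
\end{verbatim}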
