
The Network Revenue Management (NRM) problem is a well-known challenge in dynamic decision-making under uncertainty. In this problem, fixed resources must be allocated to serve customers over a finite horizon, while customers arrive according to a stochastic process. The typical NRM model assumes that customer arrivals are independent over time. In this paper, however, we explore a more general setting where customer arrivals over different periods can be correlated. We propose a new model that assumes the existence of a system state, which determines customer arrivals for the current period and evolves over time according to a time-inhomogeneous Markov chain. Our model can represent correlation in various settings and synthesizes previous literature on correlation models. To solve the NRM problem under our correlated model, we derive a new linear programming (LP) approximation of the optimal policy. Our approximation provides a tighter upper bound on the total expected value collected by the optimal policy than existing upper bounds. We use our LP to develop a new bid price policy, which computes bid prices for each system state and time period via backward induction; a decision is then made by comparing the reward of the customer against the associated bid prices. Our policy is guaranteed to collect at least a $1/(1+L)$ fraction of the total reward collected by the optimal policy, where $L$ denotes the maximum number of resources required by a customer. In summary, our work presents a new model for correlated customer arrivals in the NRM problem, provides an LP approximation for solving the problem under this model, derives a new bid price policy, and establishes a theoretical guarantee on its performance.
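
To make the bid price decision rule concrete, here is a minimal sketch (with hypothetical names; not the paper's implementation) of how such a policy would accept or reject an arriving customer, assuming the state- and time-indexed bid prices have already been computed by backward induction:

import numpy as np

def accept_customer(reward, consumption, bid_price, state, t):
    """Accept a customer iff the reward covers the sum of bid prices
    of the resources the customer would consume.

    reward      : scalar reward offered by the arriving customer
    consumption : 0/1 vector; consumption[i] = 1 if resource i is used
    bid_price   : array of shape (num_states, horizon, num_resources),
                  assumed precomputed by backward induction (hypothetical)
    state, t    : current system state and time period
    """
    threshold = np.dot(consumption, bid_price[state, t])
    return reward >= threshold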

Related Content

Let $\hat\Sigma=\frac{1}{n}\sum_{i=1}^n X_i\otimes X_i$ denote the sample covariance operator of centered i.i.d. observations $X_1,\dots,X_n$ in a real separable Hilbert space, and let $\Sigma=\mathbf{E}(X_1\otimes X_1)$. The focus of this paper is to understand how well the bootstrap can approximate the distribution of the operator norm error $\sqrt n\|\hat\Sigma-\Sigma\|_{\text{op}}$, in settings where the eigenvalues of $\Sigma$ decay as $\lambda_j(\Sigma)\asymp j^{-2\beta}$ for some fixed parameter $\beta>1/2$. Our main result shows that the bootstrap can approximate the distribution of $\sqrt n\|\hat\Sigma-\Sigma\|_{\text{op}}$ at a rate of order $n^{-\frac{\beta-1/2}{2\beta+4+\epsilon}}$ with respect to the Kolmogorov metric, for any fixed $\epsilon>0$. In particular, this shows that the bootstrap can achieve near $n^{-1/2}$ rates in the regime of large $\beta$, which substantially improves on the previous near $n^{-1/6}$ rates in the same regime. In addition to obtaining faster rates, our analysis leverages a fundamentally different perspective based on coordinate-free techniques. Moreover, our result holds in greater generality, and we propose a new model that is compatible with both elliptical and Mar\v{c}enko-Pastur models in high-dimensional Euclidean spaces, which may be of independent interest.
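
For intuition, the following is a minimal finite-dimensional sketch of the empirical bootstrap for the statistic $\sqrt n\|\hat\Sigma-\Sigma\|_{\text{op}}$; the paper's setting is an infinite-dimensional Hilbert space, so this is only a Euclidean proxy with illustrative parameters:

import numpy as np

def bootstrap_op_norm(X, B=500, rng=None):
    """Empirical bootstrap for sqrt(n) * ||Sigma_hat - Sigma||_op.

    X : (n, d) array of centered observations (a finite-dimensional
        proxy for the Hilbert-space setting).
    Returns B bootstrap draws of sqrt(n) * ||Sigma_hat* - Sigma_hat||_op,
    whose distribution approximates that of the target statistic.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    S_hat = X.T @ X / n                      # sample covariance
    stats = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)     # resample with replacement
        Xb = X[idx]
        S_b = Xb.T @ Xb / n
        # operator norm = largest singular value of the difference
        stats[b] = np.sqrt(n) * np.linalg.norm(S_b - S_hat, ord=2)
    return stats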

We study the fundamental problem of fairly allocating a set of indivisible goods among $n$ agents with additive valuations using the desirable fairness notion of maximin share (MMS). MMS is the most popular share-based notion, in which an agent finds an allocation fair to her if she receives goods worth at least her MMS value. An allocation is called MMS if all agents receive at least their MMS value. Since MMS allocations need not exist when $n>2$, a series of works showed the existence of approximate MMS allocations with the current best factor of $\frac{3}{4} + O(\frac{1}{n})$. However, a simple example in [DFL82, BEF21, AGST23] showed the limitations of existing approaches and proved that they cannot achieve a factor of $\frac{3}{4} + \Omega(1)$. In this paper, we bypass these barriers to show the existence of $(\frac{3}{4} + \frac{3}{3836})$-MMS allocations by developing new reduction rules and analysis techniques.
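
For concreteness, an agent's MMS value can be computed by brute force on tiny instances (exponential time; purely illustrative, not the paper's machinery):

from itertools import product

def mms_value(values, n):
    """Maximin share of one agent with additive valuations `values`
    over m goods, among n agents: maximize, over all partitions of the
    goods into n bundles, the value of the worst bundle.
    Exponential time; only for tiny illustrative instances.
    """
    m = len(values)
    best = 0
    for assignment in product(range(n), repeat=m):
        bundles = [0] * n
        for good, bundle in enumerate(assignment):
            bundles[bundle] += values[good]
        best = max(best, min(bundles))
    return best

# e.g. three agents, goods worth 4,3,3,2,2: the best partition is
# {4}, {3,2}, {3,2}, so the MMS value is 4
print(mms_value([4, 3, 3, 2, 2], 3))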

We introduce a new class of numerical schemes which allow for low regularity approximations to the expectation $ \mathbb{E}(|u_{k}(\tau, v^{\eta})|^2)$, where $u_k$ denotes the $k$-th Fourier coefficient of the solution $u$ of a dispersive equation and $ v^{\eta}(x) $ the associated random initial data. This quantity plays an important role in physics, in particular in the study of wave turbulence, where one needs to adopt a statistical approach in order to obtain deep insight into the generic long-time behaviour of solutions to dispersive equations. Our new class of schemes is based on Wick's theorem and Feynman diagrams together with a resonance based discretisation (see arXiv:2005.01649) set in a more general context: we introduce a novel combinatorial structure called paired decorated forests, which consist of two decorated trees whose leaf decorations come in pairs. The construction of the scheme draws inspiration from the treatment of singular stochastic partial differential equations via Regularity Structures. In contrast to classical approaches, we do not discretize the PDE itself, but rather its expectation. This allows us to heavily exploit the optimal resonance structure and underlying gain in regularity on the finite dimensional (discrete) level.
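
As a point of reference for what these schemes compute, the sketch below estimates $\mathbb{E}(|u_k(\tau, v^{\eta})|^2)$ for cubic NLS by naive Monte Carlo over random initial data with a split-step Fourier integrator; this is the brute-force baseline that expectation-level, resonance-based schemes are designed to outperform at low regularity (the equation, the law of $v^{\eta}$, and all parameters here are illustrative choices, not the paper's exact setting):

import numpy as np

def expected_mode_energy(k, tau, eta=1.0, N=64, steps=200, samples=200, seed=0):
    """Naive Monte Carlo estimate of E(|u_k(tau, v^eta)|^2) for cubic NLS
    i u_t = -u_xx + |u|^2 u on the torus, via Strang split-step Fourier.
    Random initial data (illustrative): Fourier coefficients g_j / (1+|j|)^eta
    with i.i.d. standard complex Gaussians g_j.
    """
    rng = np.random.default_rng(seed)
    freqs = np.fft.fftfreq(N, d=1.0 / N)           # integer frequencies
    dt = tau / steps
    half_linear = np.exp(-1j * freqs**2 * dt / 2)  # half-step of the linear flow
    acc = 0.0
    for _ in range(samples):
        g = rng.standard_normal(N) + 1j * rng.standard_normal(N)
        v_hat = g / (1.0 + np.abs(freqs))**eta
        u = np.fft.ifft(v_hat)
        for _ in range(steps):                     # Strang splitting
            u = np.fft.ifft(half_linear * np.fft.fft(u))
            u = u * np.exp(-1j * np.abs(u)**2 * dt)  # exact nonlinear phase
            u = np.fft.ifft(half_linear * np.fft.fft(u))
        acc += np.abs(np.fft.fft(u)[k])**2         # k-th Fourier coefficient
    return acc / samples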

In this paper the authors study a non-linear elliptic-parabolic system, which is motivated by mathematical models for lithium-ion batteries. One state satisfies a parabolic reaction-diffusion equation and the other an elliptic equation. The goal is to determine several scalar parameters in the coupled model in an optimal manner by utilizing a reliable reduced-order approach based on the reduced basis (RB) method. However, the states are coupled through a strongly non-linear function, and this makes the evaluation of online-efficient error estimates difficult. First, the well-posedness of the system is proved. Then a Galerkin finite element and RB discretization is described for the coupled system. To certify the RB scheme, hierarchical a-posteriori error estimators are utilized in an adaptive trust-region optimization method. Numerical experiments illustrate good approximation properties and efficiencies obtained using only a relatively small number of reduced basis functions.
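
As background on the RB ingredient, a reduced basis is often built from high-fidelity snapshots by proper orthogonal decomposition; the sketch below shows that generic step only, under illustrative names, and does not include the coupled system, the nonlinearity, or the hierarchical error estimators:

import numpy as np

def pod_basis(snapshots, tol=1e-6):
    """Build a reduced basis from solution snapshots by POD (via SVD).
    snapshots : (n_dof, n_snap) matrix of high-fidelity FE solutions.
    Returns the leading left singular vectors capturing a 1 - tol
    fraction of the snapshot energy.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r]

# Reduced Galerkin projection of a linear system A u = f (illustrative):
# V = pod_basis(S); u_rb = V @ np.linalg.solve(V.T @ A @ V, V.T @ f)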

The crossed random-effects model is widely used in applied statistics, finding applications in various fields such as longitudinal studies, e-commerce, and recommender systems, among others. However, these models encounter scalability challenges, as the computational time typically grows like $N^{3/2}$ or worse in the number of data points $N$. Our inspiration for addressing this issue comes from observing the recommender system employed by an online clothing retailer. Our dataset comprises over 700,000 clients, 5,000 items, and 5,000,000 measurements. When applying the maximum likelihood approach to fit crossed random effects, computational inefficiency becomes a significant concern, limiting the applicability of this approach in large-scale settings. To tackle the scalability issues, previous research by Ghosh et al. (2022a) and Ghosh et al. (2022b) has explored linear and logistic regression models utilizing fixed-effect features based on client and item variables, while incorporating random intercept terms for clients and items. In this study, we present a more general version of the problem, allowing random effect sizes/slopes. This extension enables us to capture the variability in effect size among both clients and items. Importantly, we have developed a scalable solution to address the aforementioned problem and have empirically demonstrated the consistency of our estimates. Specifically, as the number of data points increases, our estimates converge towards the true parameters. To validate our approach, we implement the proposed algorithm using Stitch Fix data.
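
To illustrate the kind of estimator whose cost scales linearly in $N$, here is a crude moment-based sketch for the crossed random-intercept special case (it ignores finite-group-size corrections and is not the algorithm of Ghosh et al. or of this paper):

import numpy as np
import pandas as pd

def crossed_intercept_moments(df):
    """Rough O(N) method-of-moments variance components for the crossed
    random-intercept model y = mu + a_client + b_item + e (illustrative
    only; no finite-group-size bias corrections).
    df columns: 'client', 'item', 'y'.
    """
    y = df['y'].to_numpy()
    mu = y.mean()
    resid = y - mu
    # group means of residuals pick up the client and item variance components
    client_means = df.assign(r=resid).groupby('client')['r'].mean()
    item_means = df.assign(r=resid).groupby('item')['r'].mean()
    var_a = max(client_means.var(), 0.0)
    var_b = max(item_means.var(), 0.0)
    var_e = max(resid.var() - var_a - var_b, 0.0)
    return {'mu': mu, 'var_client': var_a, 'var_item': var_b, 'var_noise': var_e}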

In this short note, we discuss the circumstances that can lead to a failure to observe the design order of discretization error convergence in accuracy verification when solving a time-dependent problem. In particular, we discuss the problem of failing to observe the design order of spatial accuracy with an extremely small time step. The same problem is encountered even if the time step is reduced with grid refinement. This can cause a serious problem, because one may then wind up hunting for a coding error that does not exist. This short note clarifies the mechanism behind this failure and provides a guide for avoiding such pitfalls.
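
For reference, the observed order of accuracy in such verification studies is typically computed from errors on two grids; when an unreduced error component dominates (for instance, a temporal error floor), the computed order stalls below the design order:

import math

def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
    """Observed order of accuracy from errors on two grids:
    p = log(e_coarse / e_fine) / log(r).
    If an unreduced error component dominates, the computed p will
    stall below the design order even in a bug-free code.
    """
    return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

# e.g. errors 1.0e-3 and 2.5e-4 on grids refined by 2 give p = 2.0
print(observed_order(1.0e-3, 2.5e-4))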

This paper considers the Cauchy problem for the nonlinear dynamic string equation of Kirchhoff-type with time-varying coefficients. The objective of this work is to develop a temporal discretization algorithm capable of approximating a solution to this initial-boundary value problem. To this end, a symmetric three-layer semi-discrete scheme is employed with respect to the temporal variable, wherein the value of the nonlinear term is evaluated at the middle node point. This approach enables the numerical solution at each temporal step to be obtained by inverting linear operators, yielding a system of second-order linear ordinary differential equations. Local convergence of the proposed scheme is established: it achieves quadratic convergence in the time step size on the local temporal interval. We have conducted several numerical experiments using the proposed algorithm for various test problems to validate its performance. The obtained numerical results are in accordance with the theoretical findings.
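
As an illustration of the structure described above (with notation chosen here, not taken from the paper), a symmetric three-layer scheme for a Kirchhoff-type equation $u_{tt} = q\left(t, \|\nabla u\|^2\right)\Delta u$ with the nonlinear coefficient frozen at the middle node might read

\[
  \frac{u^{n+1} - 2u^{n} + u^{n-1}}{\tau^{2}}
  = q\!\left(t_{n}, \|\nabla u^{n}\|^{2}\right)
    \Delta\!\left(\frac{u^{n+1} + u^{n-1}}{2}\right),
\]

so that $u^{n+1}$ enters only linearly and each step reduces to inverting a linear operator, as the abstract describes.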

Complex systems in science and engineering sometimes exhibit behavior that changes across different regimes. Traditional global models struggle to capture the full range of this complex behavior, limiting their ability to accurately represent the system. In response to this challenge, we propose a novel competitive learning approach for obtaining data-driven models of physical systems. The primary idea behind the proposed approach is to employ dynamic loss functions for a set of models that are trained concurrently on the data. Each model competes for each observation during training, allowing for the identification of distinct functional regimes within the dataset. To demonstrate the effectiveness of the learning approach, we coupled it with various regression methods that employ gradient-based optimizers for training. The proposed approach was tested on various problems involving model discovery and function approximation, demonstrating its ability to successfully identify functional regimes, discover true governing equations, and reduce test errors.
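
A minimal sketch of the competitive assignment step follows (illustrative; the paper's dynamic loss functions and training details are not reproduced here):

import numpy as np

def competitive_fit(models, fit_steps, X, y, epochs=50):
    """Competitive learning over K models: at each epoch every observation
    is claimed by the model with the smallest pointwise error, and each
    model is updated only on the observations it won, so distinct models
    specialize to distinct functional regimes.

    models    : list of objects with .predict(X) -> (n,) predictions (assumed)
    fit_steps : list of callables fit_steps[k](X_won, y_won) updating model k
    X, y      : numpy arrays of observations and targets
    """
    K = len(models)
    for _ in range(epochs):
        errors = np.stack([(m.predict(X) - y) ** 2 for m in models])  # (K, n)
        winner = errors.argmin(axis=0)            # competition per observation
        for k in range(K):
            mask = winner == k
            if mask.any():
                fit_steps[k](X[mask], y[mask])    # update on won observations
    return winner                                 # final regime assignment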

Digital computers implement computations using circuits, as do many naturally occurring systems (e.g., gene regulatory networks). The topology of any such circuit restricts which variables may be physically coupled during the operation of the circuit. We investigate how such restrictions on the physical coupling affect the thermodynamic costs of running the circuit. To do this, we first calculate the minimal additional entropy production that arises when we run a given gate in a circuit. We then build on this calculation to analyze how the thermodynamic costs of implementing a computation with a full circuit, comprising multiple connected gates, depend on the topology of that circuit. This analysis provides a rich new set of optimization problems that must be addressed by any designer of a circuit who wishes to minimize thermodynamic costs.
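
As a flavor of the per-gate quantity involved, stochastic thermodynamics expresses the minimal extra entropy production of running a gate whose implementation is optimized for a design distribution $q$ on actual inputs $p$ as a drop in KL divergence (the "mismatch cost"); the sketch below computes this for a single gate, as an illustration of the kind of term a circuit-level analysis aggregates (the example distributions are arbitrary):

import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in nats."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def mismatch_cost(p_in, q_in, gate):
    """Extra entropy production of running a gate designed for input
    distribution q_in on actual inputs p_in: D(p||q) - D(Gp||Gq),
    with G the gate's stochastic map (nonnegative by data processing).
    gate : (n_out, n_in) column-stochastic matrix.
    """
    p_out, q_out = gate @ p_in, gate @ q_in
    return kl(p_in, q_in) - kl(p_out, q_out)

# AND gate on two uniform input bits vs. a biased design prior:
AND = np.array([[1, 1, 1, 0],    # output 0 for inputs 00, 01, 10
                [0, 0, 0, 1]])   # output 1 for input 11
p = np.array([0.25, 0.25, 0.25, 0.25])
q = np.array([0.4, 0.3, 0.2, 0.1])
print(mismatch_cost(p, q, AND))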

The rapid growth of demanding applications in domains applying multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, and thus, typical computing paradigms in embedded systems and data centers are stressed to meet the worldwide demand for high performance. Concurrently, the landscape of the semiconductor field in the last 15 years has established power as a first-class design concern. As a result, the community of computing systems is forced to find alternative design approaches to facilitate high-performance and/or power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, with research works applying approximations across the entire traditional computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has emerged in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing: it reviews its motivation, terminology, and principles, and it classifies and presents the technical details of the state-of-the-art software and hardware approximation techniques.
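
As one concrete instance of the software-level techniques such surveys classify, loop perforation skips a fraction of loop iterations to trade accuracy for time and energy (a minimal, illustrative sketch):

def mean_exact(xs):
    return sum(xs) / len(xs)

def mean_perforated(xs, stride=2):
    """Loop perforation: process only every `stride`-th element,
    trading accuracy for speed -- a classic software approximation
    technique.
    """
    sample = xs[::stride]
    return sum(sample) / len(sample)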
