The classic online facility location problem deals with finding the optimal set of facilities in an online fashion when demand requests arrive one at a time and facilities need to be opened to service these requests. In this work, we study two variants of the online facility location problem: (1) weighted requests and (2) congestion. Both variants are motivated by real-life applications, and previously known results on online facility location cannot be directly adapted to analyse them. Weighted requests: In this variant, each demand request is a pair $(x,w)$, where $x$ is the location of the demand and $w$ is the corresponding weight of the request. The cost of servicing request $(x,w)$ at facility $F$ is $w\cdot d(x,F)$. For this variant, given $n$ requests, we present an online algorithm attaining a competitive ratio of $\mathcal{O}(\log n)$ in the secretarial model and show that this is optimal. Congestion: The congestion variant considers the case when there is an additional congestion cost that grows with the number of requests served by each facility. For this variant, when the congestion cost is a monomial, we show that there exists an algorithm attaining a constant competitive ratio. This constant is a function of the exponent of the monomial and the facility opening cost, but is independent of the number of requests.
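
To make the weighted service cost concrete, the sketch below is a Meyerson-style randomized heuristic adapted to weighted requests: each arriving request $(x,w)$ either opens a new facility with probability proportional to its weighted distance to the nearest open facility, or pays the service cost $w\cdot d(x,F)$. This is an illustrative baseline only, not the paper's algorithm; the opening cost `f` and metric `d` are assumed inputs.

```python
import random

def weighted_online_facility_location(requests, f, d):
    """Meyerson-style heuristic sketch for weighted requests (illustrative only).

    requests: iterable of (x, w) pairs arriving online
    f: facility opening cost
    d: metric distance function d(a, b)
    Returns (open_facilities, total_cost).
    """
    facilities = []
    total_cost = 0.0
    for x, w in requests:
        if not facilities:
            facilities.append(x)           # first request opens a facility
            total_cost += f
            continue
        nearest = min(facilities, key=lambda F: d(x, F))
        service = w * d(x, nearest)        # weighted service cost w * d(x, F)
        # open a new facility with probability proportional to the service cost
        if random.random() < min(1.0, service / f):
            facilities.append(x)
            total_cost += f
        else:
            total_cost += service
    return facilities, total_cost

# toy usage on the real line
if __name__ == "__main__":
    reqs = [(0.0, 1.0), (0.2, 3.0), (5.0, 2.0), (5.1, 1.0)]
    facs, cost = weighted_online_facility_location(reqs, f=1.0, d=lambda a, b: abs(a - b))
    print(facs, cost)
```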

Related content

Nonparametric estimators built by minimizing the mean squared relative error are gaining popularity for their robustness to outliers in comparison with the Nadaraya-Watson estimators. In this paper, we build a relative error regression function estimator in the case of a functional explanatory variable and a left-truncated and right-censored scalar response variable. The pointwise and uniform convergence of the estimator are proved, and its performance is assessed in a numerical study; in particular, its robustness is highlighted using the influence function as a measure of robustness.
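
As a point of reference, the ratio form of the relative-error regression function is standard: minimizing $E[((Y-r(X))/Y)^2\mid X=x]$ gives $r(x)=E[Y^{-1}\mid X=x]/E[Y^{-2}\mid X=x]$. The sketch below kernel-estimates this ratio with an assumed semi-metric `d` and bandwidth `h`; the truncation and censoring corrections developed in the paper are deliberately omitted.

```python
import numpy as np

def relative_error_regression(x0, X, Y, d, h,
                              kernel=lambda u: np.maximum(1.0 - u**2, 0.0)):
    """Kernel relative-error regression estimate at a (functional) point x0.

    Minimising the mean squared *relative* error E[((Y - r(X)) / Y)^2 | X = x]
    leads to the ratio E[1/Y | X = x] / E[1/Y^2 | X = x], estimated here by
    kernel smoothing with semi-metric d and bandwidth h.
    X: list/array of functional covariates; Y: 1-D numpy array of responses.
    Censoring and truncation corrections are omitted in this sketch.
    """
    w = np.array([kernel(d(x0, Xi) / h) for Xi in X])
    num = np.sum(w / Y)       # estimates E[1/Y | X = x0]
    den = np.sum(w / Y**2)    # estimates E[1/Y^2 | X = x0]
    return num / den if den > 0 else np.nan
```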

The ultimate goal of brain-computer interfaces (BCIs) based on visual modulation paradigms is to achieve high-speed performance without the burden of extensive calibration. Code-modulated visual evoked potential-based BCIs (cVEP-BCIs) modulated by broadband white noise (WN) offer various advantages, including increased communication speed, an expanded set of encodable targets, and enhanced coding flexibility. However, the complexity of the spatial-temporal patterns under broadband stimuli necessitates extensive calibration for effective target identification in cVEP-BCIs. Consequently, the information transfer rate (ITR) of cVEP-BCIs under limited calibration usually stays around 100 bits per minute (bpm), significantly lagging behind state-of-the-art steady-state visual evoked potential-based BCIs (SSVEP-BCIs), which achieve rates above 200 bpm. To enhance the performance of cVEP-BCIs with minimal calibration, we devised an efficient calibration stage involving a brief single-target flickering, lasting less than a minute, to extract generalizable spatial-temporal patterns. Leveraging the calibration data, we developed two complementary methods to construct cVEP temporal patterns: a linear modeling method based on the stimulus sequence and transfer learning techniques using cross-subject data. As a result, we achieved an ITR of up to 250 bpm with under a minute of calibration, comparable to state-of-the-art SSVEP paradigms. In summary, our work significantly improves cVEP performance under few-shot learning, which is expected to expand the practicality and usability of cVEP-BCIs.
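
As one hedged illustration of a "linear modeling method based on the stimulus sequence", the sketch below fits a reconvolution-style model in which a single-channel calibration trace is the stimulus code convolved with an unknown impulse response, estimated by ridge regression; temporal templates for new codes are then predicted by convolution. The function names and the ridge penalty `lam` are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def estimate_impulse_response(eeg, stimulus, n_taps, lam=1.0):
    """Ridge-regression sketch of a reconvolution-style linear model:
    a single-channel EEG trace is modelled as the stimulus code convolved
    with an unknown impulse response r of length n_taps.

    eeg: (T,) calibration signal; stimulus: (T,) binary code sequence.
    """
    T = len(eeg)
    S = np.zeros((T, n_taps))              # design matrix of lagged stimulus copies
    for k in range(n_taps):
        S[k:, k] = stimulus[:T - k]
    return np.linalg.solve(S.T @ S + lam * np.eye(n_taps), S.T @ eeg)

def predict_template(code, r, T):
    """Predict the temporal pattern for any target code (length >= T) by convolution."""
    S = np.zeros((T, len(r)))
    for k in range(len(r)):
        S[k:, k] = code[:T - k]
    return S @ r
```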

Autoregressive Markov switching (ARMS) time series models are used to represent real-world signals whose dynamics may change over time. They have found application in many areas of the natural and social sciences, as well as in engineering. In general, inference in this kind of system involves two problems: (a) detecting the number of distinct dynamical models that the signal may adopt and (b) estimating any unknown parameters in these models. In this paper, we introduce a class of ARMS time series models that includes many systems resulting from the discretisation of stochastic delay differential equations (DDEs). Remarkably, this class includes cases in which the discretisation time grid is not necessarily aligned with the delays of the DDE, resulting in discrete-time ARMS models with real (non-integer) delays. We describe methods for the maximum likelihood detection of the number of dynamical modes and the estimation of unknown parameters (including the possibly non-integer delays) and illustrate their application with an ARMS model of the El Ni\~no--Southern Oscillation (ENSO) phenomenon.
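
For readers unfamiliar with ARMS models, the toy simulator below draws a hidden Markov chain over $M$ dynamical modes and, in each mode, advances an autoregression with mode-specific coefficients and noise. It uses integer delays only; the real (non-integer) delays arising from DDE discretisation, which are the paper's focus, are not handled by this sketch.

```python
import numpy as np

def simulate_arms(n, trans, coeffs, sigmas, rng=None):
    """Toy simulator for an autoregressive Markov switching (ARMS) series.

    trans:  (M, M) Markov transition matrix over the M dynamical modes
    coeffs: list of AR coefficient vectors, one per mode
    sigmas: list of innovation standard deviations, one per mode
    Returns (signal of length n, mode sequence of length n).
    """
    rng = rng or np.random.default_rng()
    M = trans.shape[0]
    p = max(len(c) for c in coeffs)         # longest (integer) delay
    x = np.zeros(n + p)                      # zero-padded initial condition
    modes = np.zeros(n, dtype=int)
    m = 0
    for t in range(n):
        m = rng.choice(M, p=trans[m])        # switch regime according to the chain
        modes[t] = m
        c = np.asarray(coeffs[m])
        past = x[t + p - len(c):t + p][::-1]  # x_{t-1}, x_{t-2}, ...
        x[t + p] = c @ past + sigmas[m] * rng.standard_normal()
    return x[p:], modes
```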

In order to coordinate, players in a game must first identify a target pattern of behaviour. In this paper, we investigate the difficulty of identifying prominent outcomes in two kinds of binary-action coordination problems in social networks: pure coordination games and anti-coordination games. For both environments, we determine the computational complexity of finding a strategy profile that (i) maximises welfare, (ii) maximises welfare subject to being an equilibrium, and (iii) maximises potential. We show that the complexity of these objectives can vary with the type of coordination problem. Objectives (i) and (iii) are tractable in pure coordination games but NP-hard in anti-coordination games. Objective (ii), finding the best Nash equilibrium, is NP-hard for both. Our results support the idea that environments in which actions are strategic complements (e.g., technology adoption) facilitate successful coordination more readily than those in which actions are strategic substitutes (e.g., public good provision).
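
The welfare objective (i) is easy to state concretely: with binary actions, an edge contributes payoff when its endpoints match (pure coordination) or differ (anti-coordination), and welfare sums these contributions. The sketch below computes this under an assumed unit payoff per satisfied edge; the triangle example also hints at why anti-coordination is harder, since no profile can satisfy every edge of an odd cycle.

```python
def welfare(edges, profile, anti=False):
    """Welfare of a binary-action profile in a network (anti-)coordination game.

    edges:   iterable of (u, v) pairs
    profile: dict mapping each player to an action in {0, 1}
    anti:    False -> an edge pays off when endpoints match (pure coordination)
             True  -> an edge pays off when endpoints differ (anti-coordination)
    """
    return sum(1 for u, v in edges if (profile[u] != profile[v]) == anti)

# Example: on a triangle, the all-0 profile maximises welfare under pure
# coordination (welfare 3), while anti-coordination can satisfy at most 2 edges.
edges = [(0, 1), (1, 2), (0, 2)]
print(welfare(edges, {0: 0, 1: 0, 2: 0}))              # 3
print(welfare(edges, {0: 0, 1: 1, 2: 0}, anti=True))   # 2
```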

We develop a novel multiple hypothesis testing correction with family-wise error rate (FWER) control that efficiently exploits positive dependencies between potentially correlated statistical hypothesis tests. Our proposed algorithm $\texttt{max-rank}$ is conceptually straightforward, relying on the use of a $\max$-operator in the rank domain of computed test statistics. We compare our approach to the frequently employed Bonferroni correction, theoretically and empirically demonstrating its superiority over Bonferroni when positive dependency is present, and its equivalence otherwise. Our advantage over Bonferroni increases as the number of tests rises, and we maintain high statistical power whilst ensuring FWER control. We specifically frame our algorithm in the context of parallel permutation testing, a scenario that arises in our primary application of conformal prediction, a recently popularized approach for quantifying uncertainty in complex predictive settings.
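
The description above suggests a Westfall-Young-style procedure in the rank domain: rank each hypothesis' statistic across permutations, take the maximum rank per permutation to form a null distribution, and reject hypotheses whose observed rank exceeds its $(1-\alpha)$ quantile. The sketch below follows that reading; it is an interpretation of the stated idea, not the authors' reference implementation.

```python
import numpy as np

def max_rank_reject(perm_stats, alpha=0.05):
    """Sketch of a max-rank correction for parallel permutation tests.

    perm_stats: (P, K) array of test statistics, one row per permutation
                (row 0 taken as the observed/identity permutation),
                one column per hypothesis; larger statistics = more extreme.
    Returns a Boolean vector of rejections (sketch only).
    """
    P, K = perm_stats.shape
    # rank each hypothesis' statistic across the P permutations (ranks 1..P)
    ranks = perm_stats.argsort(axis=0).argsort(axis=0) + 1
    # per-permutation maximum rank over the K hypotheses: the null distribution
    max_ranks = ranks.max(axis=1)
    crit = np.quantile(max_ranks, 1 - alpha, method="higher")
    # reject hypotheses whose observed rank exceeds the critical max-rank
    return ranks[0] > crit
```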

Quantum networks crucially rely on the availability of high-quality entangled pairs of qubits, known as entangled links, distributed across distant nodes. Maintaining the quality of these links is a challenging task due to the presence of time-dependent noise, also known as decoherence. Entanglement purification protocols offer a solution by converting multiple low-quality entangled states into a smaller number of higher-quality ones. In this work, we introduce a framework to analyse the performance of entanglement buffering setups that combine entanglement consumption, decoherence, and entanglement purification. We propose two key metrics: the availability, which is the steady-state probability that an entangled link is present, and the average consumed fidelity, which quantifies the steady-state quality of consumed links. We then investigate a two-node system, where each node possesses two quantum memories: one for long-term entanglement storage, and another for entanglement generation. We model this setup as a continuous-time stochastic process and derive analytical expressions for the performance metrics. Our findings unveil a trade-off between the availability and the average consumed fidelity. We also bound these performance metrics for a buffering system that employs the well-known bilocal Clifford purification protocols. Importantly, our analysis demonstrates that, in the presence of noise, consistently purifying the buffered entanglement increases the average consumed fidelity, even when some buffered entanglement is discarded due to purification failures.
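
To illustrate the two metrics, the Monte Carlo sketch below simulates a drastically simplified buffer without purification: links are generated after exponential waiting times, decay towards a Werner state with an assumed time constant `T_coh` while stored, and are consumed after exponential storage times. Availability is estimated as the fraction of time a link is present, and fidelity is recorded at each consumption; all rates and the decoherence model are illustrative assumptions, not the paper's continuous-time model.

```python
import numpy as np

def simulate_buffer(rate_gen, rate_cons, F0, T_coh, n_events=100_000, seed=0):
    """Monte Carlo sketch of an entanglement buffer *without* purification.

    A link is (re)generated after an Exp(rate_gen) time, waits in memory while
    its fidelity decays towards 1/4 (Werner-state model) with time constant
    T_coh, and is consumed after an Exp(rate_cons) storage time.
    Returns (availability, average consumed fidelity) estimates.
    """
    rng = np.random.default_rng(seed)
    t_busy = 0.0          # total time during which a link is present
    t_total = 0.0
    fids = []
    for _ in range(n_events):
        t_gen = rng.exponential(1 / rate_gen)     # wait for a new link
        t_wait = rng.exponential(1 / rate_cons)   # storage time before consumption
        t_total += t_gen + t_wait
        t_busy += t_wait
        fids.append(0.25 + (F0 - 0.25) * np.exp(-t_wait / T_coh))
    return t_busy / t_total, float(np.mean(fids))

# Faster consumption lowers availability but raises consumed fidelity,
# illustrating the trade-off discussed above.
print(simulate_buffer(rate_gen=1.0, rate_cons=0.5, F0=0.95, T_coh=2.0))
print(simulate_buffer(rate_gen=1.0, rate_cons=5.0, F0=0.95, T_coh=2.0))
```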

How do score-based generative models (SBMs) learn the data distribution supported on a low-dimensional manifold? We investigate the score model of a trained SBM through its linear approximations and the subspaces spanned by local feature vectors. During diffusion, as the noise decreases, the local dimensionality increases and becomes more varied between different sample sequences. Importantly, we find that the learned vector field mixes samples by a non-conservative field within the manifold, although it denoises with normal projections as if there were an energy function in off-manifold directions. At each noise level, the subspace spanned by the local features overlaps with an effective density function. These observations suggest that SBMs can flexibly mix samples with the learned score field while carefully maintaining a manifold-like structure of the data distribution.
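
One plausible way to probe these claims on a trained model is to linearise the learned score at a sample and inspect its Jacobian: the singular-value spectrum separates slowly varying (on-manifold) directions from strongly contracting (off-manifold) ones, and a conservative field (the gradient of an energy) would have a symmetric Jacobian, so the relative asymmetry quantifies the non-conservative component. The sketch below assumes a `score_fn(x, sigma)` callable and is a probe in the spirit of the analysis, not the paper's exact procedure.

```python
import torch

def probe_score_jacobian(score_fn, x, sigma):
    """Linearise the learned score at x and inspect its Jacobian.

    score_fn: callable s(z, sigma) returning a tensor shaped like z.
    Returns (singular values of the Jacobian, relative asymmetry), where
    a nonzero asymmetry indicates a non-conservative vector field.
    """
    flat = lambda z: score_fn(z.reshape(x.shape), sigma).reshape(-1)
    J = torch.autograd.functional.jacobian(flat, x.reshape(-1))
    svals = torch.linalg.svdvals(J)                 # spectrum: on- vs off-manifold scales
    asym = (torch.norm(J - J.T) / torch.norm(J + J.T)).item()
    return svals, asym
```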

Urban traffic congestion remains a pressing challenge in our rapidly expanding cities, despite the abundance of available data and the efforts of policymakers. By leveraging behavioral system theory and data-driven control, this paper exploits the DeePC algorithm in the context of urban traffic control performed via dynamic traffic lights. To validate our approach, we consider a high-fidelity case study using the state-of-the-art simulation software package Simulation of Urban MObility (SUMO). Preliminary results indicate that DeePC outperforms existing approaches across various key metrics, including travel time and CO$_2$ emissions, demonstrating its potential for effective traffic management.
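
For orientation, the core of DeePC is a behavioural predictor built from Hankel matrices of recorded input/output data. The least-squares sketch below finds a trajectory combination `g` consistent with the recent past and candidate future inputs, then reads off the predicted outputs; the regularisation and constraints used in the full DeePC optimisation are omitted, and the variable names are illustrative.

```python
import numpy as np

def hankel(w, L):
    """Block-Hankel matrix with L block-rows from a data sequence w of shape (T, m)."""
    T, m = w.shape
    cols = T - L + 1
    return np.vstack([w[i:i + cols].T for i in range(L)])

def deepc_predict(ud, yd, u_ini, y_ini, u_f, T_ini, N):
    """Unregularised least-squares sketch of the behavioural predictor in DeePC.

    ud, yd: recorded input/output data, shapes (T, m) and (T, p)
    u_ini, y_ini: most recent T_ini inputs/outputs; u_f: candidate future inputs (N, m)
    Returns predicted future outputs of shape (N, p).
    """
    m, p = ud.shape[1], yd.shape[1]
    Hu, Hy = hankel(ud, T_ini + N), hankel(yd, T_ini + N)
    Up, Uf = Hu[:T_ini * m], Hu[T_ini * m:]
    Yp, Yf = Hy[:T_ini * p], Hy[T_ini * p:]
    A = np.vstack([Up, Yp, Uf])
    b = np.concatenate([u_ini.ravel(), y_ini.ravel(), u_f.ravel()])
    g, *_ = np.linalg.lstsq(A, b, rcond=None)   # trajectory combination consistent with data
    return (Yf @ g).reshape(N, p)
```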

Orthogonal meta-learners, such as the DR-learner, R-learner and IF-learner, are increasingly used to estimate conditional average treatment effects. They improve convergence rates relative to na\"{\i}ve meta-learners (e.g., the T-, S- and X-learner) through de-biasing procedures that involve applying standard learners to specifically transformed outcome data. This leads them to disregard the possibly constrained outcome space, which can be particularly problematic for dichotomous outcomes: these typically get transformed to values that are no longer constrained to the unit interval, making it difficult for standard learners to guarantee predictions within the unit interval. To address this, we construct orthogonal meta-learners for the prediction of counterfactual outcomes which respect the outcome space. As such, the obtained i-learner or imputation-learner is more generally expected to outperform existing learners, even when the outcome is unconstrained, as we confirm empirically in simulation studies and an analysis of critical care data. Our development also sheds broader light on the construction of orthogonal learners for other estimands.
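
For context on why the outcome space gets violated, the standard DR-learner pseudo-outcome is reproduced below: for binary $Y$ it can fall far outside the unit interval, which is exactly the issue the outcome-space-respecting learners above are designed to avoid. This sketch shows the conventional transformation only, not the proposed i-learner.

```python
import numpy as np

def dr_pseudo_outcome(Y, A, pi, mu0, mu1):
    """Standard DR-learner pseudo-outcome:

        phi = (A - pi(X)) / (pi(X) * (1 - pi(X))) * (Y - mu_A(X)) + mu1(X) - mu0(X)

    Y: observed outcomes; A: binary treatment indicators;
    pi, mu0, mu1: arrays of estimated propensity scores and outcome regressions.
    For binary Y, phi is not constrained to [0, 1], so a standard learner
    regressing phi on X need not respect the outcome space.
    """
    mu_a = np.where(A == 1, mu1, mu0)
    return (A - pi) / (pi * (1 - pi)) * (Y - mu_a) + mu1 - mu0
```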
