
Dynamic programming (DP) is one of the fundamental paradigms in algorithm design. However, many DP algorithms have to fill in large DP tables, represented by two-dimensional arrays, which causes at least quadratic running time and space usage. This has led to the development of improved algorithms for special cases in which the DPs satisfy additional properties such as the Monge property or total monotonicity. In this paper, we consider a new condition which assumes (among other technical assumptions) that the rows of the DP table are monotone. Under this assumption, we introduce a novel data structure for computing $(1+\varepsilon)$-approximate DP solutions in near-linear time and space in the static setting, and with polylogarithmic update times when the DP entries change dynamically. To the best of our knowledge, our new condition is incomparable to previous conditions and is the first that allows dynamic algorithms to be derived from existing DPs. Instead of using two-dimensional arrays to store the DP tables, we store the rows of the DP tables as monotone piecewise constant functions. This allows us to store length-$n$ DP table rows with entries in $[0,W]$ using only polylog$(n,W)$ bits, and to perform operations such as $(\min,+)$-convolution or rounding on these functions in polylogarithmic time. We further present several applications of our data structure. For bicriteria versions of $k$-balanced graph partitioning and simultaneous source location, we obtain the first dynamic algorithms with subpolynomial update times, as well as the first static algorithms using only near-linear time and space. Additionally, we obtain the currently fastest algorithm for fully dynamic knapsack.
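
To make the rounding idea concrete, the following minimal sketch (our own illustration under stated assumptions, not the paper's implementation) compresses a monotone non-decreasing row with entries in $[0,W]$ into breakpoints whose values are rounded up to powers of $(1+\varepsilon)$, so that only $O(\log_{1+\varepsilon} W)$ value changes need to be stored and point queries take logarithmic time; `compress_row` and `query` are hypothetical names.

```python
import bisect
import math

def compress_row(row, eps):
    """Compress a monotone non-decreasing DP row into a piecewise constant
    function. Values are rounded up to powers of (1 + eps), so each stored
    entry is a (1 + eps)-approximation, and for entries in [0, W] at most
    O(log_{1+eps} W) distinct values (breakpoints) remain."""
    breakpoints = []  # list of (first_index, rounded_value)
    prev = None
    for i, v in enumerate(row):
        r = 0.0 if v == 0 else (1 + eps) ** math.ceil(math.log(v, 1 + eps))
        if r != prev:
            breakpoints.append((i, r))
            prev = r
    return breakpoints

def query(breakpoints, i):
    """Approximate value at index i, via binary search over breakpoints."""
    pos = bisect.bisect_right(breakpoints, (i, float("inf"))) - 1
    return breakpoints[pos][1]
```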

Related Content

In conventional dual-function radar-communication (DFRC) systems, the radar and communication channels are routinely estimated at fixed time intervals based on their worst-case operation scenarios. Such situation-agnostic repeated estimation causes significant training overhead and dramatically degrades system performance, especially for applications with dynamic sensing/communication demands and limited radio resources. In this paper, we leverage channel aging characteristics to reduce the training overhead and design a resource allocation scheme based on optimizing situation-dependent channel re-estimation intervals, improving performance in a multi-target tracking DFRC system. Specifically, we exploit the temporal correlation of the channel to predict radar and communication channels, reducing the need for training preamble retransmission. Then, we characterize the channel aging effects on the Cram\'er-Rao lower bounds (CRLBs) for radar tracking performance analysis, and on the achievable rates with maximum ratio transmission (MRT) and zero-forcing (ZF) transmit beamforming for communication performance analysis. In particular, the aged CRLBs and achievable rates are derived in closed form with respect to the channel aging time, bandwidth, and power. Based on these results, we optimize the three factors to maximize the average total aged achievable rate subject to individual target tracking precision demands, communication rate requirements, and other practical constraints. Since the formulated problem is non-convex, we develop an efficient one-dimensional-search-based optimization algorithm to obtain suboptimal solutions. Finally, simulation results are presented to validate the correctness of the derived theoretical results and the effectiveness of the proposed allocation scheme.
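
As background, a widely used first-order (Gauss-Markov/Jakes) model for channel aging, stated here only for orientation since the paper's exact model may differ, is

$$\mathbf{h}[n+\Delta] = \rho(\Delta)\,\mathbf{h}[n] + \sqrt{1-\rho^{2}(\Delta)}\,\mathbf{e}[n+\Delta], \qquad \rho(\Delta) = J_{0}(2\pi f_{D} T_{s} \Delta),$$

where $J_{0}$ is the zeroth-order Bessel function of the first kind, $f_{D}$ the maximum Doppler frequency, $T_{s}$ the sampling interval, and $\mathbf{e}$ an innovation term independent of $\mathbf{h}[n]$. The correlation coefficient $\rho(\Delta)$ decays as the re-estimation interval $\Delta$ grows, which is exactly the effect that interval optimization trades off against training overhead.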

Machine Learning (ML) has been widely used for modeling and predicting physical systems. These techniques offer high expressive power and good generalizability for interpolation within observed data sets. However, the disadvantage of black-box models is that they underperform under blind conditions, since no physical knowledge is incorporated. Physics-based ML aims to address this problem by retaining the mathematical flexibility of ML techniques while incorporating physics. Accordingly, this paper proposes to embed mechanics-based models into the mean function of a Gaussian Process (GP) model and to characterize potential discrepancies through kernel machines. A specific class of kernel functions is advocated, which is connected to the gradient of the physics-based model with respect to the input and parameters, and shares similarity with the exact autocovariance function of linear dynamical systems. The spectral properties of the kernel function make it possible to capture dominant periodic processes originating from physics misspecification. However, the stationarity of the kernel function is a difficult hurdle in the sequential processing of long data sets, which we resolve through hierarchical Bayesian techniques. This implementation also mitigates computational costs, improving the scalability of GPs on sequential data. Using numerical and experimental examples, potential applications of the proposed method to structural dynamics inverse problems are demonstrated.
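
The core construction admits a compact sketch: a GP whose prior mean is the physics-based model $m(\cdot)$ and whose kernel absorbs the model-data discrepancy. The snippet below is a generic illustration (names such as `gp_with_physics_mean` are ours, and the paper's gradient-based kernel is replaced by a plain RBF for brevity).

```python
import numpy as np

def gp_with_physics_mean(X, y, Xs, m, k, noise_var):
    """GP regression whose mean function m(x) is a physics-based model;
    the kernel k models the discrepancy between model and data.
    m maps inputs to model predictions; k(A, B) returns the
    cross-covariance matrix between input sets A and B."""
    K = k(X, X) + noise_var * np.eye(len(X))
    Ks = k(Xs, X)
    resid = y - m(X)                       # data minus physics prediction
    alpha = np.linalg.solve(K, resid)
    mean = m(Xs) + Ks @ alpha              # physics mean + learned correction
    cov = k(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov

# Example kernel (RBF) for 1-D inputs:
rbf = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2)
```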

A lower bound is an important tool for predicting the performance that an estimator can achieve under a particular statistical model. Bayesian bounds are one such class of bounds, which utilize not only the observation statistics but also the prior model information. In reality, however, the true model generating the data is either unknown or simplified when deriving estimators, which motivates work on estimation bounds under model mismatch. This paper provides a derivation of a Bayesian Cram\'{e}r-Rao bound under model misspecification, defining important concepts, such as the pseudotrue parameter, that were not clearly identified in previous works. The general result is particularized to linear and Gaussian problems, where closed-form expressions are available and are used to validate the general derivation.
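
For orientation, in the classical (non-Bayesian) misspecified setting the pseudotrue parameter and the sandwich-form misspecified Cram\'er-Rao bound are commonly written as

$$\theta_{0} = \arg\min_{\theta} D_{\mathrm{KL}}\big(p(\mathbf{x}) \,\|\, f(\mathbf{x};\theta)\big), \qquad \mathrm{MCRB} = \mathbf{A}_{\theta_{0}}^{-1} \mathbf{B}_{\theta_{0}} \mathbf{A}_{\theta_{0}}^{-1},$$

where $p$ is the true data distribution, $f(\cdot\,;\theta)$ the assumed model, $\mathbf{A}_{\theta_{0}} = \mathbb{E}_{p}[\nabla_{\theta}^{2} \ln f(\mathbf{x};\theta_{0})]$, and $\mathbf{B}_{\theta_{0}} = \mathbb{E}_{p}[\nabla_{\theta} \ln f(\mathbf{x};\theta_{0}) \nabla_{\theta} \ln f(\mathbf{x};\theta_{0})^{\top}]$; the paper's Bayesian version additionally folds in the prior.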

In many developing nations, a lack of poverty data prevents critical humanitarian organizations from responding to large-scale crises. Currently, socioeconomic surveys are the only method implemented on a large scale for organizations and researchers to measure and track poverty. However, the inability to collect survey data efficiently and inexpensively leads to significant temporal gaps in poverty data; these gaps severely limit the ability of organizations to address poverty at its root cause. We propose a transfer learning model based on surface temperature change and remote sensing data to extract features useful for predicting poverty rates. Machine learning, supported by data sources of poverty indicators, has the potential to estimate poverty rates accurately and within strict time constraints. Higher temperatures, as a result of climate change, have caused numerous agricultural obstacles, socioeconomic issues, and environmental disruptions, trapping families in developing countries in cycles of poverty. To find the temperature-related patterns with the highest influence on spatial poverty rates, we use remote sensing data. The two-step transfer model first predicts the temperature delta from high-resolution satellite imagery and then extracts the image features useful for predicting poverty. The resulting model achieved 80% accuracy on temperature prediction. This method takes advantage of abundant satellite and temperature data to measure poverty in a manner comparable to existing survey methods, and outperforms similar poverty-prediction models.
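
A schematic sketch of the two-step transfer idea follows (all names are hypothetical; the paper uses a deep model on satellite imagery, which we replace here with generic feature vectors and scikit-learn models for brevity).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_imagery = rng.normal(size=(500, 32))         # stand-in for image features
temp_delta = X_imagery[:, 0] + 0.1 * rng.normal(size=500)
poverty_rate = 0.5 * X_imagery[:, 0] + 0.3 * X_imagery[:, 1]

# Step 1: learn to predict the temperature delta from imagery.
temp_model = RandomForestRegressor(n_estimators=50, random_state=0)
temp_model.fit(X_imagery, temp_delta)

# Step 2: transfer -- reuse the representation learned in step 1
# (here, the forest's leaf assignments) as features for poverty.
leaf_features = temp_model.apply(X_imagery)    # (n_samples, n_trees)
poverty_model = Ridge().fit(leaf_features, poverty_rate)
```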

In this paper, we show that in a parallel processing system, if a directed acyclic graph (DAG) can be induced in the state space and execution is \textit{enforced} along that DAG, then synchronization cost can be eliminated. Specifically, we show that in such systems, correctness is preserved even if the nodes execute asynchronously and rely on old/inconsistent information about other nodes. We present two variations for inducing DAGs -- \textit{DAG-inducing problems}, where the problem definition itself induces a DAG, and \textit{DAG-inducing algorithms}, where a DAG is induced by the algorithm. We demonstrate that the dominant clique (DC) problem and the shortest path (SP) problem are DAG-inducing problems. Among these, DC allows self-stabilization, whereas the algorithm that we present for SP does not. We demonstrate that maximal matching (MM) and 2-approximation vertex cover (VC) are not DAG-inducing problems; however, DAG-inducing algorithms can be developed for them. Among these, the algorithm for MM allows self-stabilization, whereas the 2-approximation algorithm for VC does not. Our algorithm for MM converges in $2n$ moves and does not require a synchronous environment, which is an improvement over the existing algorithms in the literature. The algorithms for DC, SP, and 2-approximation VC converge in $2m$, $2m$, and $n$ moves, respectively. We also note that DAG-inducing problems are more general than, and encapsulate, lattice linear problems (Garg, SPAA 2020). Similarly, DAG-inducing algorithms encapsulate lattice linear algorithms (Gupta and Kulkarni, SSS 2022).
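
The phenomenon can be illustrated with a toy asynchronous shortest-path relaxation (our own sketch, not the paper's algorithm): nodes are updated in arbitrary order and may read stale neighbor values, yet distances converge because every update moves monotonically toward the fixed point.

```python
import random

def async_shortest_path(n, edges, source, steps=100000):
    """Asynchronous relaxation for single-source shortest paths with
    non-negative weights. Updates happen in a random (asynchronous)
    order on possibly stale values; correctness is preserved because
    distance values only ever decrease toward the true distances."""
    INF = float("inf")
    dist = {v: (0 if v == source else INF) for v in range(n)}
    incoming = {v: [] for v in range(n)}
    for u, v, w in edges:
        incoming[v].append((u, w))
    for _ in range(steps):
        v = random.randrange(n)            # arbitrary scheduling
        if v == source:
            continue
        best = min((dist[u] + w for u, w in incoming[v]), default=INF)
        dist[v] = min(dist[v], best)       # monotone: values only decrease
    return dist
```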

We study the problem of planning restless multi-armed bandits (RMABs) with multiple actions. This is a popular model for multi-agent systems with applications such as multi-channel communication, monitoring and machine maintenance tasks, and healthcare. Whittle index policies, which are based on Lagrangian relaxations, are widely used in these settings due to their simplicity and near-optimality under certain conditions. In this work, we first show that Whittle index policies can fail in simple and practically relevant RMAB settings, even when the RMABs are indexable. We discuss why the optimality guarantees fail and why asymptotic optimality may not translate well to practically relevant planning horizons. We then propose an alternative planning algorithm based on the mean-field method, which can provably and efficiently obtain near-optimal policies for a large number of arms, without the stringent structural assumptions required by Whittle index policies. The approach borrows ideas from existing research with some improvements: it is hyper-parameter-free, and we provide an improved non-asymptotic analysis featuring (a) tighter polynomial dependence on known problem parameters; (b) high-probability bounds showing that the reward of the policy is reliable; and (c) matching sub-optimality lower bounds for this algorithm with respect to the number of arms, demonstrating the tightness of our bounds. Our extensive experimental analysis shows that the mean-field approach matches or outperforms other baselines.
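
For reference, the Whittle-style baseline the paper critiques can be sketched in a few lines: given a precomputed index for each arm's current state, activate the $k$ arms with the largest indices (`whittle_topk` is an illustrative name; computing the indices themselves is the hard part and is omitted).

```python
import numpy as np

def whittle_topk(indices, k):
    """Index policy: activate the k arms with the largest Whittle
    indices for their current states (sketch of the standard policy)."""
    order = np.argsort(indices)[::-1]
    action = np.zeros(len(indices), dtype=int)
    action[order[:k]] = 1
    return action
```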

This paper presents a dynamic logic $d\mathcal{L}_\text{CHP}$ for compositional deductive verification of communicating hybrid programs (CHPs). CHPs go beyond the traditional mixed discrete and continuous dynamics of hybrid systems by adding CSP-style operators for communication and parallelism. A compositional proof calculus is presented that modularly verifies CHPs including their parallel compositions from proofs of their subprograms by assumption-commitment reasoning in dynamic logic. Unlike Hoare-style assumption-commitments, $d\mathcal{L}_\text{CHP}$ supports intuitive symbolic execution via explicit recorder variables for communication primitives. Since $d\mathcal{L}_\text{CHP}$ is a conservative extension of differential dynamic logic $d\mathcal{L}$, it can be used soundly along with the $d\mathcal{L}$ proof calculus and $d\mathcal{L}$'s complete axiomatization for differential equation invariants.
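
As a point of reference, plain $d\mathcal{L}$ (which $d\mathcal{L}_\text{CHP}$ conservatively extends) expresses safety of a hybrid program $\alpha$ as $[\alpha]\phi$; for example,

$$x \geq 0 \land v \geq 0 \;\rightarrow\; [\{x' = v\}]\, x \geq 0$$

states that from a nonnegative position and velocity, following the ODE $x' = v$ keeps the position nonnegative. $d\mathcal{L}_\text{CHP}$ adds CSP-style communication and parallel operators on top of such programs; we do not reproduce their concrete syntax here.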

This paper introduces a new framework of algebraic equivalence relations between time series, together with new distance metrics between them, and applies these to investigate the Australian ``Black Summer'' bushfire season of 2019-2020. First, we introduce a general framework for defining equivalence between time series; heuristically, two series are intended to be equivalent if they differ only up to noise. Our first specific implementation is based on using change point algorithms and comparing statistical quantities, such as mean or variance, within stationary segments. We thus derive the existence of such equivalence relations on the space of time series such that the quotient spaces can be equipped with a metrizable topology. Next, we illustrate how to define and compute such distances among a collection of time series and how to perform clustering and additional analysis thereon. Then, we apply these insights to analyze air quality data across New South Wales, Australia, during the 2019-2020 bushfires. There, we investigate structural similarity in this data and identify locations that were anomalously impacted by the fires relative to their geographic position. This may have implications for the appropriate management of resources to avoid gaps in the defense against future fires.
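
One possible implementation of such a distance is sketched below (our own illustration under stated assumptions): each series is reduced to the piecewise constant profile of its segment means, so that series differing only by within-segment noise come out near-identical; equal lengths are assumed, and the change points may come from any change point algorithm.

```python
import numpy as np

def segment_mean_profile(ts, change_points):
    """Replace each stationary segment (delimited by change points)
    by its mean, yielding a piecewise constant profile."""
    profile = np.empty_like(ts, dtype=float)
    bounds = [0] + list(change_points) + [len(ts)]
    for a, b in zip(bounds[:-1], bounds[1:]):
        profile[a:b] = ts[a:b].mean()
    return profile

def series_distance(ts1, cps1, ts2, cps2):
    """L1 distance between two series' piecewise constant mean profiles."""
    p1 = segment_mean_profile(ts1, cps1)
    p2 = segment_mean_profile(ts2, cps2)
    return np.mean(np.abs(p1 - p2))
```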

Many problems arising in control require the determination of a mathematical model of the application. This often has to be performed starting from input-output data, leading to a task known as system identification in the engineering literature. One emerging topic in this field is the estimation of networks consisting of several interconnected dynamic systems. We consider the linear setting, assuming that the system outputs are the result of many correlated inputs, which makes system identification severely ill-conditioned. This scenario is often encountered when modeling complex cybernetic systems composed of many sub-units with feedback and algebraic loops. We develop a strategy cast in a Bayesian regularization framework, where each impulse response is seen as a realization of a zero-mean Gaussian process. Every covariance is defined by the so-called stable spline kernel, which encodes information on smooth exponential decay. We design a novel Markov chain Monte Carlo scheme able to reconstruct the posterior of the impulse responses by efficiently dealing with collinearity. Our scheme relies on a variation of the Gibbs sampling technique: beyond considering blocks forming a partition of the parameter space, some other (overlapping) blocks are also updated, based on the level of collinearity of the system inputs. Theoretical properties of the algorithm are studied, and its convergence rate is obtained. Numerical experiments are included, using systems containing hundreds of impulse responses and highly correlated inputs.
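
The kernel-based building block of such schemes can be sketched compactly: with a zero-mean Gaussian prior $g \sim \mathcal{N}(0, K)$ and measurements $y = \Phi g + e$, the posterior mean of the impulse response is $K\Phi^{\top}(\Phi K \Phi^{\top} + \sigma^2 I)^{-1} y$. The snippet below uses the first-order stable spline (TC) kernel; it illustrates only this regularized-estimation step, not the paper's MCMC scheme, and the function name is ours.

```python
import numpy as np

def stable_spline_estimate(Phi, y, lam, alpha, noise_var):
    """Regularized impulse response estimate with the first-order
    stable spline (TC) kernel K[i, j] = lam * alpha**max(i, j),
    which encodes smooth exponential decay (0 < alpha < 1).
    Phi is the regression matrix built from the input signal."""
    n = Phi.shape[1]
    idx = np.arange(n)
    K = lam * alpha ** np.maximum(idx[:, None], idx[None, :])
    S = Phi @ K @ Phi.T + noise_var * np.eye(Phi.shape[0])
    return K @ Phi.T @ np.linalg.solve(S, y)  # posterior mean of g
```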

Many tasks in natural language processing can be viewed as multi-label classification problems. However, most existing models are trained with the standard cross-entropy loss function and use a fixed prediction policy (e.g., a threshold of 0.5) for all labels, which completely ignores the complexity of, and dependencies among, different labels. In this paper, we propose a meta-learning method to capture these complex label dependencies. More specifically, our method utilizes a meta-learner to jointly learn the training policies and prediction policies for different labels. The training policies are then used to train the classifier with the cross-entropy loss function, and the prediction policies are applied at prediction time. Experimental results on fine-grained entity typing and text classification demonstrate that our proposed method obtains more accurate multi-label classification results.
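
A minimal sketch of the "prediction policy" half of this idea (the meta-learned training policies are omitted, and all names are illustrative): instead of a fixed 0.5 threshold, learn one threshold per label on validation data.

```python
import numpy as np

def learn_label_thresholds(probs_val, y_val,
                           grid=np.linspace(0.05, 0.95, 19)):
    """Learn one prediction threshold per label on validation data,
    picking the threshold that maximizes per-label F1."""
    n_labels = y_val.shape[1]
    thresholds = np.empty(n_labels)
    for j in range(n_labels):
        f1 = []
        for t in grid:
            pred = probs_val[:, j] >= t
            tp = np.sum(pred & (y_val[:, j] == 1))
            prec = tp / max(pred.sum(), 1)
            rec = tp / max((y_val[:, j] == 1).sum(), 1)
            f1.append(0.0 if tp == 0 else 2 * prec * rec / (prec + rec))
        thresholds[j] = grid[int(np.argmax(f1))]
    return thresholds
```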
