
This paper proposes a macroscopic model to describe the equilibrium distribution of passenger arrivals for the morning commute problem in a congested urban rail transit system. We use a macroscopic train operation sub-model developed by Seo et al. (2017a, b) to express the interaction between the dynamics of passengers and trains in a simplified manner while maintaining their essential physical relations. The equilibrium conditions of the proposed model are derived and a solution method is provided. The characteristics of the equilibrium are then examined through analytical discussion and numerical examples. As an application of the proposed model, we analyze a simple time-dependent timetable optimization problem with equilibrium constraints and reveal that a "capacity increasing paradox" exists such that a higher dispatch frequency can increase the equilibrium cost. Furthermore, insights into the design of the timetable are obtained, and the influence of the timetable on passengers' equilibrium travel costs is evaluated.

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industrial professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems engineering. This year's edition will provide the modeling community with further opportunities to advance the foundations of modeling and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
February 4, 2022

We study distributed binary hypothesis testing with a single sensor and two remote decision centers that are also equipped with local sensors. The communication between the sensor and the two decision centers takes place over three links: a shared link to both centers and an individual link to each of the two centers. All communication links are subject to expected rate constraints. This paper characterizes the optimal exponents region of the type-II error for given type-I error thresholds at the two decision centers and further simplifies the expressions in the special case of having only the single shared link. The exponents region illustrates a gain under expected rate constraints compared to equivalent maximum rate constraints. Moreover, it exhibits a tradeoff between the exponents achieved at the two centers.

With the increasing number of wireless communication systems and the demand for bandwidth, the wireless medium has become a congested and contested environment. Operating in such an environment brings several challenges, especially for military communication systems, which need to guarantee reliable communication while avoiding interference with other friendly or neutral systems and denying enemy systems service. In this work, we investigate a novel application of Rate-Splitting Multiple Access (RSMA) for joint communications and jamming with a Multi-Carrier (MC) waveform in a multi-antenna Cognitive Radio (CR) system. RSMA is a robust multiple access scheme for downlink multi-antenna wireless networks; it relies on multi-antenna Rate-Splitting (RS) at the transmitter and Successive Interference Cancellation (SIC) at the receivers. Our aim is to simultaneously communicate with Secondary Users (SUs) and jam Adversarial Users (AUs) to disrupt their communications, while limiting the interference to Primary Users (PUs), in a setting where all users perform broadband communications with MC waveforms in their respective networks. We consider the practical setting of imperfect channel state information at the transmitter (CSIT) for the SUs and PUs, and statistical CSIT for the AUs. We formulate a problem to obtain the optimal precoders that maximize mutual information under interference and jamming power constraints. We propose an Alternating Optimization-Alternating Direction Method of Multipliers (AOADMM) based algorithm for solving the resulting non-convex problem. We perform an analysis based on the Karush-Kuhn-Tucker conditions to determine the optimal jamming and interference power thresholds that guarantee the feasibility of the problem, and we propose a practical algorithm to calculate the interference power threshold. Simulations show that RSMA achieves a higher sum-rate than Space Division Multiple Access (SDMA).
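As an illustration of the rate-splitting-with-SIC principle the abstract builds on, the sketch below computes achievable RSMA rates for fixed precoders: the common stream is decoded first treating private streams as noise, then cancelled before each user decodes its private stream. This is a standard textbook computation under assumed real-valued channels; the function, variable names, and example setup are mine, not the paper's optimized precoder design.

```python
import numpy as np

def rsma_rates(H, p_c, P, sigma2=1.0):
    """Achievable rates for 1-layer rate splitting with SIC.
    H: (K, Nt) user channels, p_c: (Nt,) common-stream precoder,
    P: (K, Nt) private-stream precoders (row k serves user k)."""
    gains = np.abs(H @ P.T) ** 2                  # |h_k^T p_j|^2
    interf = gains.sum(axis=1) - np.diag(gains)   # inter-user interference
    # Common stream decoded first, treating all private streams as noise;
    # its rate is capped by the weakest user so that everyone can decode it.
    sinr_c = np.abs(H @ p_c) ** 2 / (gains.sum(axis=1) + sigma2)
    R_c = np.log2(1 + sinr_c).min()
    # After SIC removes the common stream, each user decodes its private stream.
    R_k = np.log2(1 + np.diag(gains) / (interf + sigma2))
    return R_c, R_k

# Two users with orthogonal channels and matched private precoders:
H = np.eye(2)
P = np.eye(2)
p_c = np.ones(2) / np.sqrt(2)
R_c, R_k = rsma_rates(H, p_c, P)
```

With orthogonal channels the private streams see no mutual interference, so each private rate is log2(2) = 1 bit, while the shared common stream pays for the private-stream interference in its SINR.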

Traditional industrial systems, e.g., power plants and water treatment plants, were built to operate in highly isolated and controlled environments. Recently, Industrial Control Systems (ICSs) have been exposed to the Internet for ease of access and adaptation to advanced technologies. However, this exposure creates security vulnerabilities, which attackers often exploit to launch attacks on ICSs. To counter this, threat hunting is performed to proactively monitor the security of ICS networks and protect them against threats that could make the systems malfunction. A threat hunter manually identifies threats and formulates a hypothesis based on the available threat intelligence. In this paper, we highlight the lack of research on automating threat hunting in ICS networks. We propose the automated extraction of threat intelligence and the automated generation and validation of hypotheses. We present an automated threat hunting framework that builds on the threat intelligence provided by the ICS MITRE ATT&CK framework. Unlike existing hunting solutions, which are cloud-based, costly, and prone to human error, our solution is centralized and open source, implemented using open-source technologies such as Elasticsearch, Conpot, Metasploit, a Web Single Page Application (SPA), and a machine learning analyser. Our results demonstrate that the proposed solution can identify attacks on the network and alert a threat hunter with a hypothesis generated from the tactics, techniques, and procedures (TTPs) in ICS MITRE ATT&CK; a machine learning classifier then automatically predicts the future actions of the attack.

We consider the problem of estimating channel fading coefficients (modeled as a correlated Gaussian vector) via Downlink (DL) training and Uplink (UL) feedback in wideband FDD massive MIMO systems. Using rate-distortion theory, we derive optimal bounds on the achievable channel state estimation error in terms of the number of training pilots in DL ($\beta_{tr}$) and feedback dimension in UL ($\beta_{fb}$), with random, spatially isotropic pilots. It is shown that when the number of training pilots exceeds the channel covariance rank ($r$), the optimal rate-distortion feedback strategy achieves an estimation error decay of $\Theta(SNR^{-\alpha})$ in estimating the channel state, where $\alpha = \min(\beta_{fb}/r, 1)$ is the so-called quality scaling exponent. We also discuss an "analog" feedback strategy, showing that it can achieve the optimal quality scaling exponent for a wide range of training and feedback dimensions with no channel covariance knowledge and simple signal processing at the user side. Our findings are supported by numerical simulations comparing various strategies in terms of channel state mean squared error and achievable ergodic sum-rate in DL with zero-forcing precoding.
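The scaling law above is easy to evaluate numerically. A minimal sketch of the stated relation (the function names are my own, not from the paper):

```python
def quality_scaling_exponent(beta_fb: float, r: int) -> float:
    """alpha = min(beta_fb / r, 1): the quality scaling exponent, valid
    when the number of DL training pilots exceeds the covariance rank r."""
    return min(beta_fb / r, 1.0)

def error_decay_factor(snr_db: float, alpha: float) -> float:
    """The SNR^{-alpha} factor in the Theta(SNR^{-alpha}) error decay
    (up to an unspecified constant)."""
    snr = 10.0 ** (snr_db / 10.0)
    return snr ** (-alpha)

# Feedback dimension below the covariance rank caps the decay rate:
print(quality_scaling_exponent(beta_fb=8, r=16))   # 0.5
print(quality_scaling_exponent(beta_fb=32, r=16))  # 1.0
```

In words: once the feedback dimension reaches the covariance rank, spending more feedback no longer improves the error exponent.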

At the high level, the fundamental differences between materials originate from the unique nature of the constituent chemical elements. Before specific differences emerge according to the precise ratios of elements (composition) in a given crystal structure (phase), the material can be represented by its phase field defined simply as the set of the constituent chemical elements. Classification of the materials at the level of their phase fields can accelerate materials discovery by selecting the elemental combinations that are likely to produce desirable functional properties in synthetically accessible materials. Here, we demonstrate that classification of the materials phase field with respect to the maximum expected value of a target functional property can be combined with the ranking of the materials synthetic accessibility. This end-to-end machine learning approach (PhaseSelect) first derives the atomic characteristics from the compositional environments in all computationally and experimentally explored materials and then employs these characteristics to classify the phase field by their merit. PhaseSelect can quantify the materials potential at the level of the periodic table, which we demonstrate with significant accuracy for three avenues of materials applications: high-temperature superconducting, high-temperature magnetic and targetted energy band gap materials.

We derive minimax testing errors in a distributed framework where the data is split over multiple machines and their communication to a central machine is limited to $b$ bits. We investigate both the $d$- and infinite-dimensional signal detection problems under Gaussian white noise. We also derive distributed testing algorithms reaching the theoretical lower bounds. Our results show that distributed testing is subject to fundamentally different phenomena that are not observed in distributed estimation. Among our findings, we show that testing protocols that have access to shared randomness can perform strictly better in some regimes than those that do not. Furthermore, we show that consistent nonparametric distributed testing is always possible, even with as little as $1$ bit of communication, and the corresponding test outperforms the best local test that uses only the information available at a single machine.

Time series forecasting is widely used in business intelligence, e.g., to forecast stock market prices and sales and to support the analysis of data trends. Most time series of interest are macroscopic time series aggregated from microscopic data. However, little prior work has studied forecasting macroscopic time series by leveraging data at the microscopic level, rather than modeling the macroscopic time series directly. In this paper, we assume that the microscopic time series follow some unknown mixture of probabilistic distributions. We theoretically show that once we identify the ground-truth latent mixture components, the estimation of the time series from each component can be improved because of lower variance, thus benefiting the estimation of the macroscopic time series as well. Inspired by the power of Seq2seq and its variants for modeling time series data, we propose Mixture of Seq2seq (MixSeq), an end-to-end mixture model that clusters microscopic time series, where all components come from a family of Seq2seq models with different parameters. Extensive experiments on both synthetic and real-world data show the superiority of our approach.
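The clustering step rests on standard mixture-model posteriors. The sketch below shows the responsibility computation for assigning each series to a component, assuming per-component log-likelihoods as input; in MixSeq those log-likelihoods would come from the Seq2seq components, but the function and inputs here are illustrative, not the paper's implementation.

```python
import numpy as np

def responsibilities(log_lik, log_pi):
    """Posterior component responsibilities p(z = k | series).
    log_lik: (n_series, K) per-component log-likelihoods of each series.
    log_pi:  (K,) log mixture weights."""
    logits = log_lik + log_pi                    # unnormalized log posterior
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=1, keepdims=True)

# With equal priors, responsibilities are just the normalized likelihoods:
log_lik = np.log(np.array([[0.9, 0.1], [0.2, 0.8]]))
log_pi = np.log(np.array([0.5, 0.5]))
r = responsibilities(log_lik, log_pi)
```

Each series is then (soft-)assigned to the component with the highest responsibility, and per-component models are refit on their clusters.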

The mobile network that millions of people use every day is one of the most complex real-world systems. Optimizing a mobile network to meet exploding customer demand and reduce CAPEX/OPEX poses greater challenges than those addressed in prior work. Learning to solve complex real-world problems for everyone's benefit has long been an ultimate goal of AI. However, applying deep reinforcement learning (DRL) to complex real-world problems remains unsolved, due to imperfect information, data scarcity, complex real-world rules, potential negative real-world impact, etc. To bridge this reality gap, we propose a sim-to-real framework that directly transfers learning from simulation to the real world without any training in the real world. First, we distill the temporal-spatial relationships between cells and mobile users into a scalable, 3D, image-like tensor that best characterizes the partially observed mobile network. Second, inspired by AlphaGo, we introduce a novel self-play mechanism that empowers DRL agents to gradually improve by competing for the best record on multiple tasks, just as athletes compete for world records in the decathlon. Third, we propose a decentralized DRL method that coordinates multiple agents to compete and cooperate as a team, maximizing global reward and minimizing potential negative impact. Using 7693 unseen test tasks over 160 unseen mobile networks in another simulator, as well as 6 field trials on 4 commercial mobile networks, we demonstrate that this sim-to-real framework directly transfers learning not only from one simulator to another but also from simulation to the real world. This is the first time a DRL agent has successfully transferred its learning directly from simulation to very complex real-world problems with imperfect information, complex rules, huge state/action spaces, and multi-agent interactions.

We propose a temporally coherent generative model addressing the super-resolution problem for fluid flows. Our work represents a first approach to synthesize four-dimensional physics fields with neural networks. Based on a conditional generative adversarial network that is designed for the inference of three-dimensional volumetric data, our model generates consistent and detailed results by using a novel temporal discriminator, in addition to the commonly used spatial one. Our experiments show that the generator is able to infer more realistic high-resolution details by using additional physical quantities, such as low-resolution velocities or vorticities. Besides improvements in the training process and in the generated outputs, these inputs offer means for artistic control as well. We additionally employ a physics-aware data augmentation step, which is crucial to avoid overfitting and to reduce memory requirements. In this way, our network learns to generate advected quantities with highly detailed, realistic, and temporally coherent features. Our method works instantaneously, using only a single time-step of low-resolution fluid data. We demonstrate the abilities of our method using a variety of complex inputs and applications in two and three dimensions.

In this paper, we study the optimal convergence rate for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) merely convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improved condition numbers.
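For reference, the accelerated scheme underlying this result can be sketched as plain centralized Nesterov AGD for a smooth, strongly convex objective; in the paper's setting the same iteration runs on the dual problem, with each gradient evaluated via local computation plus communication over the network. The code below is a generic sketch of the iteration, not the paper's distributed algorithm.

```python
import numpy as np

def nesterov_agd(grad, x0, L, mu, iters=200):
    """Nesterov's accelerated gradient for an L-smooth, mu-strongly convex
    function, with the constant momentum coefficient for that setting."""
    x, y = x0.copy(), x0.copy()
    kappa = L / mu                                        # condition number
    beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)    # momentum weight
    for _ in range(iters):
        x_next = y - grad(y) / L           # gradient step from the lookahead
        y = x_next + beta * (x_next - x)   # extrapolation (momentum) step
        x = x_next
    return x

# Toy instance: minimize 0.5 * x^T A x - b^T x, whose minimizer is A^{-1} b.
A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(A, b)
x_hat = nesterov_agd(lambda x: A @ x - b, np.zeros(2), L=10.0, mu=1.0)
```

The error contracts at roughly $(1 - 1/\sqrt{\kappa})$ per iteration, which is the square-root dependence on the condition number that distinguishes accelerated from plain gradient descent.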
