
Unbalanced Job Approximation - UJA is a family of low-cost formulas for obtaining the throughput of Queueing Networks - QNs with fixed-rate servers, based on a Taylor series expansion of job loadings about the mean loading. UJA with one term yields the same throughput as the optimistic Balanced Job Bound - BJB, which at some point exceeds the maximum asymptotic throughput. The accuracy of the estimated throughput increases with the number of terms in the Taylor series. UJA can be used in parametric studies to reduce the cost of solving large QNs by aggregating stations into a single Flow-Equivalent Service Center - FESC defined by its throughput characteristic. While UJA has been extended to two classes, it may be applied to more classes by job class aggregation. BJB has been extended to QNs with delay servers and multiple job classes by Eager and Sevcik. The throughput bounds of Eager and Sevcik and of Kriz, the Proportional Bound - PB and Proportional Approximation Bound - PAM of Hsieh and Lam, and the Geometric Bound - GB of Casale et al. are also reviewed.
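
As a quick illustration of the bounds discussed above, the following sketch computes the optimistic and pessimistic BJB together with the asymptotic throughput limit for a closed QN; the per-station service demands are hypothetical, and the formulas are the standard balanced-job-bound expressions rather than the UJA recursion itself. Note how the optimistic bound overtakes the asymptotic limit 1/D_max as the population grows, which is the behaviour the abstract attributes to one-term UJA.

```python
# Minimal sketch of classic throughput bounds for a closed product-form QN
# with fixed-rate stations and N jobs (no think time). Demands are hypothetical.
def throughput_bounds(demands, n_jobs):
    """Return (pessimistic BJB, optimistic BJB, asymptotic) throughput bounds."""
    d_total = sum(demands)          # total service demand D
    d_max = max(demands)            # bottleneck demand D_max
    d_avg = d_total / len(demands)  # average demand per station
    asymptotic = 1.0 / d_max                                  # X(N) <= 1/D_max
    optimistic = n_jobs / (d_total + (n_jobs - 1) * d_avg)    # BJB upper bound
    pessimistic = n_jobs / (d_total + (n_jobs - 1) * d_max)   # BJB lower bound
    return pessimistic, optimistic, asymptotic

demands = [0.4, 0.3, 0.2]  # hypothetical per-station service demands (seconds)
for n in (1, 5, 20, 100):
    lo, hi, asym = throughput_bounds(demands, n)
    print(f"N={n:3d}  BJB in [{lo:.3f}, {hi:.3f}]  asymptotic={asym:.3f}")
```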

Related Content

Definition of the Taylor series: if a function f(x) has derivatives up to order (n+1) in some neighborhood of a point x_0, then within that neighborhood the n-th order Taylor formula of f(x) is f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x_0)}{2!}(x - x_0)^2 + \cdots + \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n + R_n(x), where R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}(x - x_0)^{n+1}, for some \xi between x_0 and x, is the Lagrange form of the remainder. Letting n \to \infty yields the Taylor series expansion of f(x).
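
A minimal numeric check of the truncated expansion, using exp(x) about x_0 = 0 as an example: the error shrinks as terms are added, mirroring how UJA's accuracy improves with more Taylor terms.

```python
import math

def taylor_exp(x, n_terms):
    """Truncated Taylor series of exp at x0 = 0 with n_terms terms."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

x = 1.5
for n in (1, 2, 4, 8, 16):
    approx = taylor_exp(x, n)
    # Error against the exact value decreases rapidly with more terms.
    print(f"n={n:2d}  approx={approx:.6f}  error={abs(approx - math.exp(x)):.2e}")
```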

Partial differential equations (PDEs) are ubiquitous in the world around us, modelling phenomena from heat and sound to quantum systems. Recent advances in deep learning have resulted in the development of powerful neural solvers; however, while these methods have demonstrated state-of-the-art performance in both accuracy and computational efficiency, a significant challenge remains in their interpretability. Most existing methodologies prioritize predictive accuracy over clarity in the underlying mechanisms driving the model's decisions. Interpretability is crucial for trustworthiness and broader applicability, especially in scientific and engineering domains where neural PDE solvers might see the most impact. In this context, a notable gap in current research is the integration of symbolic frameworks (such as symbolic regression) into these solvers. Symbolic frameworks have the potential to distill complex neural operations into human-readable mathematical expressions, bridging the divide between black-box predictions and interpretable solutions.

A system of partial differential equations (PDEs) for a heat-transferring copper rod and a magnetizable piezoelectric beam, describing the longitudinal vibrations and the total charge accumulation at the electrodes of the beam, is considered in the transmission line setting. For magnetizable piezoelectric beams, traveling electromagnetic and mechanical waves can interact strongly despite a huge difference in velocities. It is known that the heat and beam interactions in the open-loop setting do not yield exponential stability with thermal effects only. Therefore, two types of boundary-type state feedback controllers are proposed: (i) both feedback controllers are chosen static; (ii) the electrical controller of the piezoelectric beam is chosen dynamic to accelerate the system dynamics. The PDE system for each case is shown to have exponentially stable solutions via cleverly constructed Lyapunov functions with various multipliers. The proposed proof technique is in line with proving the exponential stability of Finite-Difference-based robust model reductions as the discretization parameter tends to zero.
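
The stability claims above concern the specific coupled heat/piezoelectric system, but the flavour of boundary-damped exponential decay can be seen on a much simpler toy problem. The sketch below (an illustrative stand-in, not the paper's model or proof) discretizes a 1D wave equation with a static boundary damper u_x(1,t) = -k u_t(1,t) by finite differences and prints the discrete energy, which decays over time; the gain k and grid sizes are arbitrary choices.

```python
import numpy as np

nx, k, T = 200, 0.5, 4.0
dx = 1.0 / nx
dt = 0.5 * dx                       # CFL-stable time step
x = np.linspace(0.0, 1.0, nx + 1)
u = np.sin(np.pi * x)               # initial displacement, zero at both ends
u_prev = u.copy()                   # zero initial velocity
r2 = (dt / dx) ** 2
steps = int(T / dt)

for n in range(steps):
    u_next = np.empty_like(u)
    u_next[0] = 0.0                 # clamped left end
    u_next[1:-1] = 2*u[1:-1] - u_prev[1:-1] + r2*(u[2:] - 2*u[1:-1] + u[:-2])
    # Ghost-point treatment of the damping condition u_x(1,t) = -k*u_t(1,t)
    ut = (u[-1] - u_prev[-1]) / dt
    ghost = u[-2] - 2.0 * dx * k * ut
    u_next[-1] = 2*u[-1] - u_prev[-1] + r2*(ghost - 2*u[-1] + u[-2])
    u_prev, u = u, u_next
    if n % (steps // 8) == 0:
        ux = np.diff(u) / dx
        ut_all = (u - u_prev) / dt
        energy = 0.5 * dx * (np.sum(ut_all**2) + np.sum(ux**2))
        print(f"t={n*dt:5.2f}  energy={energy:.4e}")   # should decay in t
```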

A hydraulic fracturing system with super-hydrophobic proppants is characterized by a transient triple-porosity Navier-Stokes model. For this complex multiphysics system, particularly in three-dimensional space, a local parallel and non-iterative finite element method based on two-grid discretizations is proposed. The idea behind the local parallel approach is to combine a decoupled method, a two-grid method, and a domain decomposition method. The strategy first captures low-frequency components across the decoupled domain using a coarse grid, then tackles high-frequency components by solving residual equations within overlapping subdomains, employing finer grids and local parallel procedures at each time step, as sketched below. This approach yields a significant improvement in computational efficiency. Furthermore, convergence results for the approximate solutions produced by the algorithm are obtained. Finally, 2D/3D numerical experiments demonstrate the effectiveness and efficiency of the algorithm and illustrate its advantages in application.
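
A minimal 1D rendering of the two-grid local-correction idea on a hypothetical Poisson problem (far simpler than the triple-porosity Navier-Stokes system): a coarse solve captures the low-frequency part, and residual equations on two overlapping fine-grid subdomains, which could run in parallel, supply the high-frequency correction.

```python
import numpy as np

def poisson_solve(f, h):
    """Zero-Dirichlet solve of -u'' = f on a uniform interior grid."""
    n = len(f)
    A = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return np.linalg.solve(A, f)

f = lambda x: np.pi**2 * np.sin(np.pi * x)        # exact solution sin(pi*x)

Nc, Nf = 8, 64
xc = np.linspace(0, 1, Nc + 1)[1:-1]              # coarse interior nodes
xf = np.linspace(0, 1, Nf + 1)[1:-1]              # fine interior nodes
hf = 1 / Nf

uc = poisson_solve(f(xc), 1 / Nc)                 # 1) coarse solve
u = np.interp(xf, np.r_[0, xc, 1], np.r_[0, uc, 0])  # 2) prolong to fine grid
err0 = np.abs(u - np.sin(np.pi * xf)).max()

upad = np.r_[0.0, u, 0.0]
res = f(xf) - (2*u - upad[:-2] - upad[2:]) / hf**2   # fine-grid residual

def local_correct(res, i0, i1):
    """3) solve the residual equation on [i0, i1) with zero Dirichlet data."""
    corr = np.zeros(len(res))
    corr[i0:i1] = poisson_solve(res[i0:i1], hf)
    return corr

mid = len(xf) // 2
c1 = local_correct(res, 0, mid + 8)               # overlapping subdomains;
c2 = local_correct(res, mid - 8, len(xf))         # these could run in parallel
m1 = (np.arange(len(xf)) < mid + 8).astype(float)
m2 = (np.arange(len(xf)) >= mid - 8).astype(float)
u = u + (c1 + c2) / (m1 + m2)                     # average in the overlap

err1 = np.abs(u - np.sin(np.pi * xf)).max()
print(f"max error before/after local correction: {err0:.2e} / {err1:.2e}")
```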

Although depth has recently received attention for various tasks, it remains an unexplored modality for weakly-supervised object detection (WSOD). We propose an amplifier method that enhances the performance of WSOD by integrating depth information. Our approach can be applied to any WSOD method based on multiple-instance learning, without requiring additional annotations or incurring large computational expense. The proposed method employs a monocular depth estimation technique to obtain hallucinated depth information, which is then incorporated into a Siamese WSOD network using contrastive loss and fusion. By analyzing the relationship between language context and depth, we calculate depth priors to identify the bounding box proposals that may contain an object of interest. These depth priors are then used to update the list of pseudo ground-truth boxes or to adjust the confidence of per-box predictions. The proposed method is evaluated on six datasets (COCO, PASCAL VOC, Conceptual Captions, Clipart1k, Watercolor2k, and Comic2k) by implementing it on top of two state-of-the-art WSOD methods, and we demonstrate a substantial enhancement in performance.
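
The following sketch shows one plausible form of the re-scoring step described above; the function name, the prior depth range, and the penalty factor are hypothetical, not the paper's exact procedure. Boxes whose mean hallucinated depth falls outside the language-derived prior range have their confidence down-weighted before pseudo ground-truth selection.

```python
import numpy as np

def rescore_with_depth(boxes, scores, depth_map, prior=(0.2, 0.6), penalty=0.5):
    """boxes: (N,4) int array [x1,y1,x2,y2]; depth_map: (H,W) values in [0,1]."""
    new_scores = scores.copy()
    for i, (x1, y1, x2, y2) in enumerate(boxes):
        mean_d = depth_map[y1:y2, x1:x2].mean()     # mean depth inside the box
        if not (prior[0] <= mean_d <= prior[1]):    # outside the depth prior
            new_scores[i] *= penalty                # down-weight confidence
    return new_scores

rng = np.random.default_rng(0)
depth = rng.random((100, 100))                      # stand-in hallucinated depth
boxes = np.array([[10, 10, 40, 40], [50, 50, 90, 90]])
scores = np.array([0.8, 0.7])
print(rescore_with_depth(boxes, scores, depth))
```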

The ongoing change in Earth's climate is causing an increase in the frequency and severity of climate-related hazards, for example from coastal flooding, riverine flooding, and tropical cyclones. There is an urgent need to quantify the potential impacts of these events on infrastructure and users, especially for hitherto neglected infrastructure sectors such as telecommunications, particularly given our increasing dependence on digital technologies. In this analysis, a global assessment is undertaken, quantifying the number of mobile cells vulnerable to climate hazards using open crowdsourced data covering 7.6 million 2G, 3G, 4G, and 5G assets. For a 0.01% annual probability event under a high emissions scenario (RCP8.5), the number of affected cells is estimated at 2.26 million for tropical cyclones, equating to USD 1.01 billion in direct damage (increases against the historical baseline of 14% and 44%, respectively). Equally, for coastal flooding, the number of potentially affected cells for an event with a 0.01% annual probability under RCP8.5 is 109.9 thousand, equating to direct damage costs of USD 2.69 billion (increases against the baseline of 70% and 78%, respectively). The findings demonstrate the need for risk analysts to include mobile communications (and telecommunications more broadly) in future critical national infrastructure assessments. Indeed, this paper contributes a proven assessment methodology for use in future research on this critical infrastructure sector.
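
A toy version of the exposure-and-direct-damage calculation underlying such an assessment might look as follows; the hazard raster, cell-site locations, damage threshold, and unit replacement cost are all synthetic placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
flood_depth = rng.exponential(0.3, size=(500, 500))   # synthetic hazard raster (m)
sites = rng.integers(0, 500, size=(10_000, 2))        # synthetic cell-site grid indices
unit_cost = 25_000.0                                  # assumed USD per damaged cell
threshold = 0.5                                       # assumed damaging depth (m)

# Look up hazard intensity at each site, then sum costs for sites above threshold.
depths_at_sites = flood_depth[sites[:, 0], sites[:, 1]]
affected = depths_at_sites > threshold
print(f"affected cells: {affected.sum():,}")
print(f"direct damage: USD {affected.sum() * unit_cost:,.0f}")
```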

Explainable Artificial Intelligence (XAI) is transforming the field of Artificial Intelligence (AI) by enhancing the trust of end-users in machines. As the number of connected devices keeps growing, the Internet of Things (IoT) market needs to be trustworthy for end-users. However, the existing literature still lacks a systematic and comprehensive survey of the use of XAI for IoT. To bridge this gap, in this paper we address XAI frameworks with a focus on their characteristics and support for IoT. We illustrate the widely used XAI services for IoT applications, such as security enhancement, the Internet of Medical Things (IoMT), Industrial IoT (IIoT), and the Internet of City Things (IoCT). We also suggest implementation choices of XAI models over IoT systems in these applications with appropriate examples and summarize the key inferences for future work. Moreover, we present cutting-edge developments in edge XAI structures and the support of sixth-generation (6G) communication services for IoT applications, along with key inferences. In a nutshell, this paper constitutes the first holistic compilation on the development of XAI-based frameworks tailored for the demands of future IoT use cases.

Graph Convolutional Networks (GCNs) have been widely applied in various fields due to their significant power in processing graph-structured data. Typical GCNs and their variants work under a homophily assumption (i.e., nodes with the same class are prone to connect to each other), while ignoring the heterophily that exists in many real-world networks (i.e., nodes with different classes tend to form edges). Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining the immediate representations, which introduces noise and irrelevant information into the result. Moreover, these methods do not change the propagation mechanism itself, which works under the homophily assumption and is a fundamental part of GCNs; this makes it difficult to distinguish the representations of nodes from different classes. To address this problem, in this paper we design a novel propagation mechanism that can automatically adjust the propagation and aggregation process according to the homophily or heterophily between node pairs. To adaptively learn the propagation process, we introduce two measures of the homophily degree between node pairs, learned from topological and attribute information, respectively. We then incorporate the learnable homophily degree into the graph convolution framework, which is trained in an end-to-end scheme, enabling it to go beyond the homophily assumption. More importantly, we theoretically prove that our model can constrain the similarity of representations between nodes according to their homophily degree. Experiments on seven real-world datasets demonstrate that this new approach outperforms the state-of-the-art methods under heterophily or low homophily, and achieves competitive performance under homophily.
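
One plausible minimal rendering of a homophily-aware layer (a generic sketch of the idea, not the paper's exact architecture): a per-edge score in [-1, 1], learned from the endpoint features, decides whether a neighbour's message is aggregated positively (homophilous) or negatively (heterophilous).

```python
import torch
import torch.nn as nn

class AdaptivePropagation(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        # Learned homophily score per edge, from concatenated endpoint features.
        self.score = nn.Sequential(nn.Linear(2 * in_dim, 1), nn.Tanh())

    def forward(self, x, edge_index):
        src, dst = edge_index                                # (E,), (E,)
        s = self.score(torch.cat([x[src], x[dst]], dim=-1))  # (E, 1) in [-1, 1]
        msg = s * self.lin(x)[src]                           # signed messages
        out = torch.zeros(x.size(0), msg.size(-1))
        out.index_add_(0, dst, msg)                          # sum over neighbours
        return out

x = torch.randn(5, 8)                                        # 5 nodes, 8 features
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])      # 4 directed edges
layer = AdaptivePropagation(8, 16)
print(layer(x, edge_index).shape)                            # torch.Size([5, 16])
```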

Recently, Mutual Information (MI) has attracted attention for bounding the generalization error of Deep Neural Networks (DNNs). However, accurately estimating the MI in DNNs is intractable, so most previous works have had to relax the MI bound, which in turn weakens the information-theoretic explanation for generalization. To address this limitation, this paper introduces a probabilistic representation of DNNs for accurately estimating the MI. Leveraging the proposed MI estimator, we validate the information-theoretic explanation for generalization and derive a tighter generalization bound than the state-of-the-art relaxations.
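
For contrast with the paper's estimator, a simple baseline MI estimate between a scalar input and a scalar hidden activation can be computed by discretization; the binning estimator below is a generic textbook approach, not the probabilistic representation the paper proposes, and it hints at why accurate MI estimation in high-dimensional DNNs is hard (binning scales poorly with dimension).

```python
import numpy as np

def mutual_information(x, t, bins=16):
    """Binning estimate of I(X;T) = sum p(x,t) log[p(x,t) / (p(x) p(t))]."""
    joint, _, _ = np.histogram2d(x, t, bins=bins)
    p_xt = joint / joint.sum()
    p_x = p_xt.sum(axis=1, keepdims=True)
    p_t = p_xt.sum(axis=0, keepdims=True)
    mask = p_xt > 0
    return float((p_xt[mask] * np.log(p_xt[mask] / (p_x @ p_t)[mask])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=50_000)
t = np.tanh(x) + 0.1 * rng.normal(size=x.size)   # a noisy stand-in "hidden unit"
print(f"I(X;T) ~ {mutual_information(x, t):.3f} nats")
```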

Within the rapidly developing Internet of Things (IoT), numerous and diverse physical devices, Edge devices, Cloud infrastructure, and their quality of service (QoS) requirements need to be represented within a unified specification in order to enable rapid IoT application development, monitoring, and dynamic reconfiguration. However, heterogeneities among different configuration knowledge representation models pose limitations on the acquisition, discovery, and curation of configuration knowledge for coordinated IoT applications. This paper proposes a unified data model to represent IoT resource configuration knowledge artifacts. It also proposes IoT-CANE (Context-Aware recommendatioN systEm) to facilitate incremental knowledge acquisition and declarative, context-driven knowledge recommendation.
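
A hypothetical rendering of what such a unified configuration-knowledge artifact could look like (field names are illustrative, not IoT-CANE's actual schema): one record type spans device, Edge, and Cloud resources together with their QoS requirements.

```python
from dataclasses import dataclass, field

@dataclass
class QoSRequirement:
    metric: str          # e.g. "latency", "availability"
    threshold: float
    unit: str

@dataclass
class IoTResource:
    resource_id: str
    tier: str            # "device" | "edge" | "cloud"
    capabilities: dict = field(default_factory=dict)
    qos: list = field(default_factory=list)

# A device, an edge node, and a cloud service share one representation.
camera = IoTResource(
    resource_id="cam-042",
    tier="device",
    capabilities={"sensor": "rgb", "resolution": "1080p"},
    qos=[QoSRequirement("latency", 50.0, "ms")],
)
print(camera)
```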

While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions that fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
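
A simplified stand-in for such a deep-feature perceptual distance (not the paper's learned metric): unit-normalize the channel activations of a VGG16 layer and take the mean squared difference between the two images' feature maps. Loading the pretrained weights downloads them on first use.

```python
import torch
import torchvision.models as models

# Truncate VGG16 after an intermediate convolutional block.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()

def deep_feature_distance(img_a, img_b):
    """img_*: (1, 3, H, W) tensors, ImageNet-normalized in a real pipeline."""
    with torch.no_grad():
        fa, fb = vgg(img_a), vgg(img_b)
    fa = fa / (fa.norm(dim=1, keepdim=True) + 1e-8)   # unit-normalize channels
    fb = fb / (fb.norm(dim=1, keepdim=True) + 1e-8)
    return ((fa - fb) ** 2).mean().item()

a = torch.rand(1, 3, 64, 64)
print(deep_feature_distance(a, a))                     # 0.0 for identical images
print(deep_feature_distance(a, torch.rand(1, 3, 64, 64)))
```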
