
Placing applications on mobile edge computing servers is a complex challenge involving many servers, users, and requests. Existing algorithms require long solution times for high-dimensional problems with many uncertainty scenarios, so an efficient approach is needed to maximize quality of service while respecting all technical constraints. One such approach is machine learning, which emulates optimal solutions for application placement on edge servers. Machine learning models are expected to learn how to allocate user requests to servers based on the spatial positions of users and servers. In this study, the problem is formulated as a two-stage stochastic program. A sufficient set of training records is generated by varying parameters such as user locations and request rates and solving the optimization model for each instance. Based on each user's distances to the available servers and their request rates, the machine learning models then generate the first-stage decision variables of the stochastic optimization model, namely the user-to-server request allocation, and are employed as independent decision agents that reliably mimic the optimization model. Support Vector Machines (SVM) and Multi-layer Perceptrons (MLP) are used in this research to obtain practical decisions from the stochastic optimization models. Each model achieves an execution effectiveness of over 80%. This research aims to provide a more efficient approach for tackling high-dimensional problems and uncertainty scenarios in mobile edge computing by leveraging machine learning models for optimal request-allocation decisions on edge servers. The results suggest that machine learning models can significantly improve solution times compared to conventional approaches.
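The imitation step described above can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the "optimal" labels here are simply nearest-server assignments standing in for the first-stage solutions of the stochastic program, and all sizes and coordinates are made up.

```python
# Sketch: train an MLP to imitate an optimizer's first-stage allocations.
# The labels below are a surrogate (nearest server); a real pipeline would
# use solutions of the two-stage stochastic program instead.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_users, n_servers = 500, 4
servers = rng.uniform(0, 100, size=(n_servers, 2))   # server coordinates
users = rng.uniform(0, 100, size=(n_users, 2))       # user coordinates
rates = rng.uniform(1, 10, size=(n_users, 1))        # request rates

# Features: distance from each user to every server, plus the request rate.
dists = np.linalg.norm(users[:, None, :] - servers[None, :, :], axis=2)
X = StandardScaler().fit_transform(np.hstack([dists, rates]))
y = dists.argmin(axis=1)                             # surrogate optimal labels

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
clf.fit(X, y)
acc = clf.score(X, y)
print(f"imitation accuracy: {acc:.2f}")
```

Once trained, such a model replaces the optimizer at decision time, which is where the reported speed-up over conventional solvers comes from.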

Related content

Numerous online services are data-driven: the behavior of users affects the system's parameters, and the system's parameters affect the users' experience of the service, which in turn affects how users interact with the system. For example, people may choose to use a service only for tasks it already handles well, or they may switch to a different service. These adaptations influence a system's ability to learn about a population of users and tasks and thereby improve its performance broadly. In this work, we analyze a class of such dynamics -- where users allocate their participation among services to reduce the individual risk they experience, and services update their model parameters to reduce their risk on their current user population. We refer to these dynamics as \emph{risk-reducing}; they cover a broad class of common model updates, including gradient descent and multiplicative weights. For this general class of dynamics, we show that asymptotically stable equilibria are always segmented, with each sub-population allocated to a single learner. Under mild assumptions, the utilitarian social optimum is a stable equilibrium. In contrast to previous work, which shows that repeated risk minimization can result in representation disparity for a single learner (Hashimoto et al., 2018; Miller et al., 2021), we find that repeated myopic updates with multiple learners lead to better outcomes. We illustrate the phenomena via a simulated example initialized from real data.
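A toy instance of these risk-reducing dynamics can be simulated directly. The constants below (two subpopulations, two scalar services, squared loss, multiplicative-weights rate 5, gradient step 0.05) are illustrative choices, not values from the paper; the point is that the allocation segments and each service converges to its sub-population's optimum.

```python
import numpy as np

# Two subpopulations with distinct optimal predictions; two services,
# each fitting one scalar parameter under squared loss.
targets = np.array([0.0, 1.0])        # per-population optimum
alloc = np.full((2, 2), 0.5)          # alloc[i, j]: share of pop i on service j
theta = np.array([0.4, 0.6])          # service parameters

for _ in range(500):
    risk = (targets[:, None] - theta[None, :]) ** 2       # risk[i, j]
    # Users: multiplicative-weights reallocation toward lower-risk services.
    alloc = alloc * np.exp(-5.0 * risk)
    alloc /= alloc.sum(axis=1, keepdims=True)
    # Services: gradient step on the risk of their current user mix.
    grad = 2.0 * (alloc * (theta[None, :] - targets[:, None])).sum(axis=0)
    theta -= 0.05 * grad

print(alloc.round(3), theta.round(3))   # segmented allocation, theta -> targets
```

The run ends in a segmented equilibrium: population 0 concentrates on service 0 and population 1 on service 1, matching the paper's qualitative claim about stable equilibria.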

We develop a novel deep learning technique, termed Deep Orthogonal Decomposition (DOD), for dimensionality reduction and reduced order modeling of parameter-dependent partial differential equations. The approach constructs a deep neural network model that approximates the solution manifold through a continuously adaptive local basis. In contrast to global methods, such as Proper Orthogonal Decomposition (POD), the adaptivity allows the DOD to overcome the Kolmogorov barrier, making the approach applicable to a wide spectrum of parametric problems. Furthermore, due to its hybrid linear-nonlinear nature, the DOD can accommodate both intrusive and nonintrusive techniques, providing highly interpretable latent representations and tighter control on error propagation. For this reason, the proposed approach stands out as a valuable alternative to other nonlinear techniques, such as deep autoencoders. The methodology is discussed both theoretically and practically, evaluating its performance on problems featuring nonlinear PDEs, singularities, and parametrized geometries.
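The benefit of a parameter-adapted local basis over a single global one can be seen on a travelling pulse, a classic Kolmogorov-barrier example. In this sketch a local POD over nearby parameter values stands in for the DOD's learned adaptive basis (an assumption for illustration; the paper uses a neural network).

```python
import numpy as np

# Travelling Gaussian pulse: global linear bases decay slowly here.
x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(0.2, 0.8, 61)
snaps = np.exp(-((x[None, :] - mus[:, None]) / 0.05) ** 2)   # (61, 200)

def pod_basis(S, r):
    U, _, _ = np.linalg.svd(S.T, full_matrices=False)        # columns span S
    return U[:, :r]

def proj_error(u, V):
    return np.linalg.norm(u - V @ (V.T @ u)) / np.linalg.norm(u)

mu_t = 0.515                                  # unseen test parameter
u_t = np.exp(-((x - mu_t) / 0.05) ** 2)

V_global = pod_basis(snaps, 3)                # one rank-3 basis for all mus
near = np.argsort(np.abs(mus - mu_t))[:5]     # 5 closest training parameters
V_local = pod_basis(snaps[near], 3)           # parameter-adapted rank-3 basis

e_g, e_l = proj_error(u_t, V_global), proj_error(u_t, V_local)
print(f"global rank-3 error {e_g:.3f}, local rank-3 error {e_l:.3f}")
```

At equal rank, the adapted basis is far more accurate, which is the mechanism the DOD exploits with a continuously learned, rather than nearest-neighbour, basis map.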

Today, more and more embedded devices are connected through a network, generally the Internet, offering users various services. This concept, the Internet of Things (IoT), brings information and control capabilities to many fields such as medicine, smart homes, and home security. The main drawbacks of an IoT environment are its dependency on Internet connectivity and its need for continuously powered devices. These dependencies may affect system performance, notably request-processing response times. In this context, we propose a continuous performance-monitoring methodology for IoT systems based on the publish/subscribe communication model. Our approach relies on Stochastic Petri net modelling and analysis to assess performance, and it self-optimizes whenever poor performance is detected. We target improved performance, in particular response times, through online modification of influencing factors.
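The monitor-then-optimize loop can be sketched in a few lines. Here a closed-form M/M/1 mean response-time formula stands in for the paper's Stochastic Petri net analysis, and "adding processing capacity" stands in for the influencing factors; both are assumptions made purely for illustration.

```python
# Sketch of threshold-triggered self-optimization.
def mean_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue, in seconds."""
    assert arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)

arrival, service, target = 8.0, 9.0, 0.5        # req/s, req/s, seconds
while mean_response_time(arrival, service) > target:
    service += 1.0     # influencing factor: add processing capacity online
rt = mean_response_time(arrival, service)
print(f"service rate {service} req/s, mean response time {rt:.2f}s")
```

In the paper's setting the performance model is re-evaluated continuously, so the loop above would run whenever monitoring detects that the response-time target is violated.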

The complex information processing system of humans generates many objective and subjective evaluations, making the exploration of human cognitive products of great cutting-edge theoretical value. In recent years, deep learning technologies, which are inspired by biological brain mechanisms, have made significant strides in psychological and cognitive scientific research, particularly in the memorization and recognition of facial data. This paper investigates, through experimental research, how neural networks process and store facial expression data and associate these data with a range of psychological attributes produced by humans. The researchers utilized the deep learning model VGG16, demonstrating that neural networks can learn and reproduce key features of facial data, thereby storing image memories. Moreover, the experimental results reveal the potential of deep learning models for understanding human emotions and cognitive processes, and they establish a manifold visualization interpretation of cognitive products or psychological attributes from a non-Euclidean space perspective, offering new insights into enhancing the explainability of AI. This study not only advances the application of AI technology in the field of psychology but also provides a new psychological-theoretical understanding of the information processing of AI. The code is available here: //github.com/NKUShaw/Psychoinformatics.
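The manifold-visualization step mentioned above can be illustrated independently of the network. In this sketch, synthetic 64-dimensional vectors stand in for VGG16 features of two facial-expression categories (an assumption for illustration), and t-SNE provides the low-dimensional embedding.

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in "deep features": two clusters playing the role of feature
# vectors for two expression categories.
rng = np.random.default_rng(0)
feats = np.vstack([
    rng.normal(0.0, 1.0, size=(100, 64)),
    rng.normal(6.0, 1.0, size=(100, 64)),
])
labels = np.repeat([0, 1], 100)

emb = TSNE(n_components=2, init="pca", random_state=0,
           perplexity=30).fit_transform(feats)

# Separation in the embedding: centroid distance vs. intra-cluster spread.
c0 = emb[labels == 0].mean(axis=0)
c1 = emb[labels == 1].mean(axis=0)
spread = max(emb[labels == 0].std(), emb[labels == 1].std())
ratio = np.linalg.norm(c0 - c1) / spread
print(f"embedding shape {emb.shape}, separation ratio {ratio:.2f}")
```

A clear separation in the 2-D embedding is the kind of structure the paper reads as a manifold organization of psychological attributes in feature space.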

We propose a comprehensive framework for policy gradient methods tailored to continuous-time reinforcement learning. It is based on the connection between stochastic control problems and randomised problems, enabling applications across various classes of Markovian continuous-time control problems beyond diffusion models, including, e.g., regular, impulse, and optimal stopping/switching problems. By utilizing a change of measure in the control randomisation technique, we derive a new policy gradient representation for these randomised problems, featuring parametrised intensity policies. We further develop actor-critic algorithms specifically designed to address general Markovian stochastic control problems. Our framework is demonstrated through its application to optimal switching problems, with two numerical case studies in the energy sector focusing on real options.

Advanced Krylov subspace methods are investigated for the solution of large sparse linear systems arising from stiff adjoint-based aerodynamic shape optimization problems. Special attention is paid to the flexible inner-outer GMRES strategy combined with the most relevant preconditioning and deflation techniques. The choice of this specific class of Krylov solvers for challenging problems is based on its outstanding convergence properties. Typically, in our implementation, the efficiency of the preconditioner is enhanced with an overlapping domain decomposition method. However, maintaining the performance of the preconditioner may be challenging, since scalability and efficiency of a preconditioning technique are often antagonistic properties. In this paper we demonstrate how flexible inner-outer Krylov methods are able to overcome this critical issue. A numerical study is performed considering either a Finite Volume (FV) or a high-order Discontinuous Galerkin (DG) discretization, which affects the arithmetic intensity and memory bandwidth of the algebraic operations. We consider test cases of transonic turbulent flows with RANS modelling over the two-dimensional supercritical OAT15A airfoil and the three-dimensional ONERA M6 wing. Benefits in terms of robustness and convergence compared to standard GMRES solvers are obtained. Strong scalability analysis shows satisfactory results. Based on these representative problems, a discussion of recommended numerical practices is proposed.
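The effect of a strong preconditioner on GMRES iteration counts can be shown on a small model problem. This sketch uses an ILU factorization as a simple substitute for the paper's overlapping domain-decomposition preconditioner, and a 1-D convection-diffusion matrix as a stand-in for the adjoint system; both are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# 1-D convection-diffusion matrix as a small stand-in for a stiff system.
n = 400
A = diags([-1.2, 2.4, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# ILU factorization applied as a preconditioner.
ilu = spilu(A, drop_tol=1e-5)
M = LinearOperator((n, n), matvec=ilu.solve)

iters = {"plain": 0, "prec": 0}
def counter(key):
    def cb(_):
        iters[key] += 1
    return cb

x_p, _ = gmres(A, b, callback=counter("plain"), callback_type="pr_norm")
x_m, _ = gmres(A, b, M=M, callback=counter("prec"), callback_type="pr_norm")

res = np.linalg.norm(A @ x_m - b) / np.linalg.norm(b)
print(iters, f"preconditioned relative residual {res:.1e}")
```

The preconditioned solve needs far fewer iterations at the same tolerance; in the flexible inner-outer setting of the paper, the preconditioner itself is an inner Krylov solve, which is what flexible GMRES accommodates.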

We study a query model of computation in which an n-vertex k-hypergraph can be accessed only via its independence oracle or via its colourful independence oracle, and each oracle query may incur a cost depending on the size of the query. In each of these models, we obtain oracle algorithms to approximately count the hypergraph's edges, and we unconditionally prove that no oracle algorithm for this problem can have significantly smaller worst-case oracle cost than our algorithms.

We consider covariance parameter estimation for Gaussian processes with functional inputs. From an increasing-domain asymptotics perspective, we prove the asymptotic consistency and normality of the maximum likelihood estimator. We extend these theoretical guarantees to encompass scenarios accounting for approximation errors in the inputs, which ensures the robustness of practical implementations relying on conventional sampling methods or projections onto a functional basis. Loosely speaking, both consistency and normality hold when the approximation error becomes negligible, a condition that is often achieved as the number of samples or basis functions becomes large. These latter asymptotic properties are illustrated through analytical examples, including one that covers the case of non-randomly perturbed grids, as well as several numerical illustrations.
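A minimal version of maximum likelihood estimation with functional inputs looks as follows. The grid approximation of the L2 distance, the squared-exponential covariance form, and all sizes are illustrative assumptions; the paper's setting and asymptotic regime are more general.

```python
import numpy as np
from scipy.optimize import minimize

# Functional inputs f_i on a grid; covariance between inputs depends on
# their (grid-approximated) L2 distance.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 2 * np.pi, 50)
n = 60
coef = rng.normal(size=(n, 2))
F = coef[:, :1] * np.sin(t) + coef[:, 1:] * np.cos(t)      # (n, 50)
d2 = ((F[:, None, :] - F[None, :, :]) ** 2).mean(axis=2)   # squared L2 dist.

def kernel(log_s2, log_l):
    return np.exp(log_s2) * np.exp(-d2 / (2.0 * np.exp(2.0 * log_l)))

# One observation drawn with true parameters (variance 1.0, lengthscale 0.5).
K_true = kernel(0.0, np.log(0.5)) + 1e-6 * np.eye(n)
y = np.linalg.cholesky(K_true) @ rng.normal(size=n)

def nll(p):
    """Negative log-likelihood up to an additive constant."""
    K = kernel(*p) + 1e-6 * np.eye(n)
    try:
        L = np.linalg.cholesky(K)
    except np.linalg.LinAlgError:
        return 1e10                                        # reject bad params
    a = np.linalg.solve(L, y)
    return 0.5 * a @ a + np.log(np.diag(L)).sum()

p0 = np.array([0.7, 0.0])                  # log-variance, log-lengthscale
res = minimize(nll, p0, method="Nelder-Mead")
print("estimated log-params:", res.x)
```

Replacing `F` with a sampled or basis-projected approximation of the true inputs is exactly the perturbation the paper's robustness results cover: the estimator behaves well once that approximation error is negligible.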

To efficiently tackle parametrized multiscale and/or large-scale problems, we propose an adaptive localized model order reduction framework combining both local offline training and local online enrichment with localized error control. For the latter, we adapt the residual localization strategy introduced in [Buhr, Engwer, Ohlberger, Rave, SIAM J. Sci. Comput., 2017], which allows us to derive a localized a posteriori error estimator that can be employed to adaptively enrich the reduced solution space locally where needed. Numerical experiments demonstrate the potential of the proposed approach.
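The estimate-then-enrich loop at the heart of such frameworks can be sketched in a non-localized setting: a greedy reduced-basis iteration driven by a residual-norm a posteriori estimator. The parametrized system and tolerance below are illustrative assumptions, and the localization of the estimator to subdomains is omitted.

```python
import numpy as np

# Parametrized SPD system A(mu) x = b with A(mu) = A0 + mu * I.
n = 120
A0 = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # 1-D Laplacian
b = np.ones(n)
mus = np.linspace(0.1, 10.0, 40)                           # training params
A = lambda mu: A0 + mu * np.eye(n)

V = np.zeros((n, 0))                                       # reduced basis
tol, history = 1e-6, []
for _ in range(30):
    # Residual-based a posteriori estimator at every training parameter.
    est = []
    for mu in mus:
        if V.shape[1] == 0:
            r = b
        else:
            xr = np.linalg.solve(V.T @ A(mu) @ V, V.T @ b)
            r = b - A(mu) @ (V @ xr)
        est.append(np.linalg.norm(r))
    worst = int(np.argmax(est))
    history.append(max(est))
    if history[-1] < tol:
        break
    # Enrich: add the full-order snapshot at the worst parameter.
    v = np.linalg.solve(A(mus[worst]), b)
    v -= V @ (V.T @ v)                                     # Gram-Schmidt
    V = np.hstack([V, (v / np.linalg.norm(v))[:, None]])

print(f"basis size {V.shape[1]}, max estimator {history[-1]:.2e}")
```

In the localized framework the same loop runs per subdomain, with the localized estimator deciding where enrichment is needed rather than enriching a single global space.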

This paper focuses on the distributed online convex optimization problem with time-varying inequality constraints over a network of agents, where each agent collaborates with its neighboring agents to minimize the cumulative network-wide loss over time. To reduce communication overhead between the agents, we propose a distributed event-triggered online primal-dual algorithm over a time-varying directed graph. With several classes of appropriately chosen decreasing parameter sequences and non-increasing event-triggered threshold sequences, we establish dynamic network regret and network cumulative constraint violation bounds. Finally, a numerical simulation example is provided to verify the theoretical results.
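The event-triggered idea can be illustrated on a toy consensus-plus-gradient problem: agents broadcast their state only when it has drifted past a decaying threshold since the last broadcast. The quadratic losses, complete-graph mixing, and all sequences below are illustrative choices, not the paper's primal-dual algorithm or constraint handling.

```python
import numpy as np

# Each agent i minimizes 0.5 * (x - a_i)^2; the network optimum is mean(a).
a = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
m, T = len(a), 400
x = a.copy()                 # local states
xhat = x.copy()              # last broadcast states (stale copies)
triggers = 0

for t in range(T):
    thr = 0.5 / np.sqrt(t + 1)               # decaying trigger threshold
    send = np.abs(x - xhat) > thr            # event condition per agent
    xhat[send] = x[send]
    triggers += int(send.sum())
    lr = 1.0 / (t + 2)                       # decreasing step size
    # Mixing of broadcast states plus a local gradient step.
    x = xhat.mean() - lr * (x - a)

err = np.max(np.abs(x - a.mean()))
print(f"final error {err:.3f}, broadcasts {triggers} of {m * T} possible")
```

The agents reach (approximate) consensus at the optimum while broadcasting only a fraction of the time, which is precisely the communication saving the event-triggered thresholds buy, at the cost of the staleness the regret analysis must absorb.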
