
The design of heat exchanger fields is a key phase in ensuring the long-term sustainability of such renewable energy systems. This task has to be accomplished by modelling the relevant processes in the complex system made up of several exchangers, where heat transfer must be considered both within and outside the exchangers. We propose a mathematical model for the study of heat conduction into the soil as a consequence of the presence of the exchangers. The problem is formulated and solved with an analytical approach. On the basis of this analytical solution, we propose an optimisation procedure that computes the best positions of the exchangers by minimising the adverse effects of neighbouring devices. Numerical experiments show the effectiveness of the proposed method, also by comparison with a reference approximation of the problem obtained with a finite difference method.
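
As an illustration of this kind of placement optimisation, the sketch below assumes the classical infinite line source model for each exchanger (a common analytical choice for borehole fields, not necessarily the solution derived in the paper), superposes the temperature perturbations the devices induce on each other, and adjusts the positions inside a bounded plot to minimise the mutual interference. All parameter values and names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import exp1  # exponential integral E1

# Illustrative soil and operation parameters (not taken from the paper)
alpha = 1.0e-6           # soil thermal diffusivity [m^2/s]
k = 2.0                  # soil thermal conductivity [W/(m K)]
q = 30.0                 # heat extraction rate per unit length [W/m]
t_op = 10 * 365 * 86400  # 10 years of operation [s]

def ils_perturbation(r):
    """Infinite line source temperature perturbation at distance r after time t_op."""
    return q / (4.0 * np.pi * k) * exp1(r**2 / (4.0 * alpha * t_op))

def interference(flat_xy):
    """Sum of pairwise temperature perturbations between all exchangers."""
    xy = flat_xy.reshape(-1, 2)
    total = 0.0
    for i in range(len(xy)):
        for j in range(i + 1, len(xy)):
            r = np.linalg.norm(xy[i] - xy[j])
            total += ils_perturbation(max(r, 0.1))  # guard against r = 0
    return total

# Keep 4 exchangers inside a 20 m x 20 m plot while minimising mutual interference
n = 4
rng = np.random.default_rng(0)
x0 = rng.uniform(0.0, 20.0, size=2 * n)
res = minimize(interference, x0, bounds=[(0.0, 20.0)] * (2 * n), method="L-BFGS-B")
print("optimised positions:\n", res.x.reshape(-1, 2))
```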

Related content

This paper introduces an integrated lot sizing and scheduling problem inspired by a real-world application in the off-the-road tire industry. The problem considers the assignment of different items to parallel machines with complex eligibility constraints within a finite planning horizon. It also considers a large panel of specific constraints, such as backordering, a limited number of setups, upstream resource saturation, and customer prioritization. A novel mixed-integer formulation is proposed with the objective of optimizing different normalized criteria related to inventory and service-level performance. Based on this mathematical formulation, a problem-based matheuristic method that solves the lot sizing and assignment problems separately is proposed to solve the industrial case. A computational study and sensitivity analysis are carried out based on real-world data with up to 170 products, 70 unrelated parallel machines and 42 periods. The obtained results show the effectiveness of the proposed approach in improving the company's solution. Indeed, the two most important KPIs for management have been improved by 32% for backorders and 13% for overstock, respectively. Moreover, the computational time has been reduced significantly.
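
To make the modelling idea concrete, here is a deliberately small single-item, single-machine lot-sizing sketch with backorders written with PuLP; the paper's formulation (parallel machines, eligibility constraints, setup limits, normalized criteria) is far richer, and all data below are made up.

```python
# Single-item, single-machine lot sizing with backorders (illustrative data).
import pulp

T = 6
demand = [40, 60, 30, 80, 20, 50]
capacity = 100                          # units producible in a period with a setup
setup_cost, hold_cost, backorder_cost = 50.0, 1.0, 5.0

m = pulp.LpProblem("lot_sizing_with_backorders", pulp.LpMinimize)
x = pulp.LpVariable.dicts("produce", range(T), lowBound=0)       # production quantity
inv = pulp.LpVariable.dicts("inventory", range(T), lowBound=0)   # overstock
back = pulp.LpVariable.dicts("backorder", range(T), lowBound=0)  # backordered demand
y = pulp.LpVariable.dicts("setup", range(T), cat="Binary")       # setup indicator

# Weighted objective: penalize setups, overstock and backorders
m += pulp.lpSum(setup_cost * y[t] + hold_cost * inv[t] + backorder_cost * back[t]
                for t in range(T))

for t in range(T):
    prev_inv = inv[t - 1] if t > 0 else 0
    prev_back = back[t - 1] if t > 0 else 0
    # inventory balance with backordering
    m += prev_inv - prev_back + x[t] - demand[t] == inv[t] - back[t]
    # production allowed only in periods with a setup, limited by capacity
    m += x[t] <= capacity * y[t]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("production plan:", [x[t].value() for t in range(T)])
```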

We use a numerical-analytic technique to construct a sequence of successive approximations to the solution of a system of fractional differential equations, subject to Dirichlet boundary conditions. We prove the uniform convergence of the sequence of approximations to a limit function, which is the unique solution to the boundary value problem under consideration, and give necessary and sufficient conditions for the existence of solutions. The obtained theoretical results are confirmed by a model example.
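
The following sketch illustrates the successive-approximation idea on a simplified scalar Caputo initial value problem D^alpha x = f(t, x), x(0) = x0, rather than the Dirichlet boundary value problem treated in the paper; the fractional integral is evaluated with a basic product rectangle rule, and the right-hand side is a made-up example.

```python
import numpy as np
from math import gamma

alpha, x0 = 0.7, 1.0
f = lambda t, x: -x                      # hypothetical right-hand side
N = 400
t = np.linspace(0.0, 1.0, N + 1)
h = t[1] - t[0]

x = np.full(N + 1, x0)                   # initial approximation x_0(t) = x0
for _ in range(25):                      # successive approximations
    x_new = np.empty_like(x)
    x_new[0] = x0
    for n in range(1, N + 1):
        s = t[:n]                        # left endpoints of the subintervals
        kernel = (t[n] - s) ** (alpha - 1)
        # x_{m+1}(t_n) = x0 + (1/Gamma(alpha)) * int_0^{t_n} (t_n-s)^(alpha-1) f(s, x_m(s)) ds
        x_new[n] = x0 + h / gamma(alpha) * np.sum(kernel * f(s, x[:n]))
    if np.max(np.abs(x_new - x)) < 1e-10:
        break
    x = x_new

print("x(1) ≈", x[-1])                   # exact solution is E_alpha(-t^alpha)
```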

Non-terrestrial networks (NTNs), which integrate space and aerial networks with terrestrial systems, are a key area in the emerging sixth-generation (6G) wireless networks. As part of 6G, NTNs must provide pervasive connectivity to a wide range of devices, including smartphones, vehicles, sensors, robots, and maritime users. However, due to the high mobility and deployment of NTNs, managing the space-air-sea (SAS) NTN resources, i.e., energy, power, and channel allocation, is a major challenge. The design of a SAS-NTN for energy-efficient resource allocation is investigated in this study. The goal is to maximize system energy efficiency (EE) by collaboratively optimizing user equipment (UE) association, power control, and unmanned aerial vehicle (UAV) deployment. Given the limited payloads of UAVs, this work focuses on minimizing the total energy cost of UAVs (trajectory and transmission) while meeting EE requirements. A mixed-integer nonlinear programming problem is formulated, and an algorithm is developed to decompose it and solve each subproblem in a distributed manner. The binary (UE association) and continuous (power, deployment) variables are separated using Benders decomposition (BD), and the Dinkelbach algorithm (DA) is then used to convert the fractional program in the subproblem into an equivalent solvable form. A standard optimization solver is utilized to deal with the complexity of the master problem for the binary variables. The alternating direction method of multipliers (ADMM) algorithm is used to solve the subproblem for the continuous variables. Our proposed algorithm provides a suboptimal solution, and simulation results demonstrate that it achieves better EE than baseline schemes.
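
As a minimal illustration of the Dinkelbach step mentioned above, the sketch below maximizes a scalar ratio f(x)/g(x) by repeatedly solving the parametric problem max_x f(x) - lambda g(x) and updating lambda; in the paper this transformation is applied to the energy-efficiency objective inside the Benders/ADMM decomposition, whereas f, g and the feasible interval here are made up.

```python
import numpy as np
from scipy.optimize import minimize_scalar

f = lambda x: np.log1p(x)        # "throughput"-like numerator (illustrative)
g = lambda x: 0.5 + x            # "power"-like denominator (illustrative)

lam = 0.0                        # current estimate of the optimal ratio
for _ in range(50):
    # Solve the parametric subproblem max_x f(x) - lam * g(x) over x in [0, 10]
    res = minimize_scalar(lambda x: -(f(x) - lam * g(x)),
                          bounds=(0.0, 10.0), method="bounded")
    x_star = res.x
    value = f(x_star) - lam * g(x_star)
    if abs(value) < 1e-9:        # Dinkelbach optimality condition F(lam) = 0
        break
    lam = f(x_star) / g(x_star)  # update the ratio estimate

print("optimal ratio ≈", lam, "attained near x ≈", x_star)
```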

Various algorithms for reinforcement learning (RL) exhibit dramatic variation in their convergence rates as a function of problem structure. Such problem-dependent behavior is not captured by worst-case analyses and has accordingly inspired a growing effort in obtaining instance-dependent guarantees and deriving instance-optimal algorithms for RL problems. This research has been carried out, however, primarily within the confines of theory, providing guarantees that explain \textit{ex post} the performance differences observed. A natural next step is to convert these theoretical guarantees into guidelines that are useful in practice. We address the problem of obtaining sharp instance-dependent confidence regions for the policy evaluation problem and the optimal value estimation problem of an MDP, given access to an instance-optimal algorithm. As a consequence, we propose a data-dependent stopping rule for instance-optimal algorithms. The proposed stopping rule adapts to the instance-specific difficulty of the problem and allows for early termination for problems with favorable structure.
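
A hedged sketch of what such a data-dependent stopping rule can look like for policy evaluation: Monte Carlo returns are collected until an empirical-Bernstein-style confidence width falls below a target accuracy, so instances with lower return variance terminate earlier. This illustrates the adaptive-stopping idea only; the specific instance-optimal confidence regions and rule derived in the paper are not reproduced here.

```python
import numpy as np

def simulate_return(rng):
    """Hypothetical environment: one noisy discounted return per episode."""
    return 1.0 + 0.3 * rng.standard_normal()

def evaluate_with_stopping(eps=0.05, delta=0.05, max_episodes=100_000, seed=0):
    rng = np.random.default_rng(seed)
    returns = []
    width = np.inf
    for n in range(1, max_episodes + 1):
        returns.append(simulate_return(rng))
        if n < 2:
            continue
        mean = np.mean(returns)
        var = np.var(returns, ddof=1)
        log_term = np.log(3.0 / delta)
        b = 3.0   # assumed bound on the return range (illustrative)
        # empirical-Bernstein-style width: variance term + range correction
        width = np.sqrt(2 * var * log_term / n) + 3 * b * log_term / n
        if width < eps:                  # instance-adaptive early termination
            return mean, width, n
    return np.mean(returns), width, max_episodes

value, width, episodes = evaluate_with_stopping()
print(f"estimate {value:.3f} ± {width:.3f} after {episodes} episodes")
```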

The additive hazards model specifies the effect of covariates on the hazard in an additive way, in contrast to the popular Cox model, in which it is multiplicative. As a non-parametric model, it offers a very flexible way of modeling time-varying covariate effects. It is most commonly estimated by ordinary least squares. In this paper we consider the case where covariates are bounded, and derive the maximum likelihood estimator under the constraint that the hazard is non-negative for all covariate values in their domain. We describe an efficient algorithm to find the maximum likelihood estimator. The method is contrasted with the ordinary least squares approach in a simulation study and illustrated on a realistic data set.
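
For context, the sketch below implements Aalen's ordinary least squares estimator of the cumulative regression coefficients, the baseline against which the constrained maximum likelihood estimator is compared; the data are simulated (one bounded covariate, no censoring) purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
Z = rng.uniform(0, 1, size=n)                 # one bounded covariate
hazard = 0.5 + 1.0 * Z                        # true additive hazard
times = rng.exponential(1.0 / hazard)         # event times (no censoring here)
X = np.column_stack([np.ones(n), Z])          # intercept + covariate

order = np.argsort(times)
B = np.zeros(2)                               # cumulative coefficients B(t)
for i in order:                               # loop over ordered event times
    at_risk = times >= times[i]
    Xr = X[at_risk]
    dN = np.zeros(at_risk.sum())
    dN[np.flatnonzero(at_risk) == i] = 1.0    # indicator of the current failure
    # OLS increment (X'X)^{-1} X' dN over the at-risk set
    dB, *_ = np.linalg.lstsq(Xr, dN, rcond=None)
    B += dB

print("cumulative coefficients at the last event time:", B)
```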

In this work, we introduce a time memory formalism in a poroelasticity model that couples the pressure and displacement. We assume this multiphysics process occurs in multicontinuum media. The mathematical model contains a coupled system of equations for the pressures in each continuum and elasticity equations for the displacements of the medium. We assume that the temporal dynamics are governed by fractional derivatives, following some works in the literature. We derive an implicit finite difference approximation for time discretization based on the Caputo time fractional derivative. A Discrete Fracture Model (DFM) is used to model fluid flow through fractures and treat the complex network of fractures. We assume different fractional powers in fractures and matrix due to slow and fast dynamics. We develop a coarse grid approximation based on the Generalized Multiscale Finite Element Method (GMsFEM), where we solve local spectral problems for the construction of the multiscale basis functions. We present numerical results for two-dimensional model problems in fractured heterogeneous porous media. We investigate the error between the reference (fine-scale) solution and the multiscale solution with different numbers of multiscale basis functions. The results show that the proposed method can provide good accuracy on a coarse grid.
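
For reference, a standard way to discretize the Caputo time fractional derivative of order $\alpha \in (0,1)$ on a uniform grid $t_n = n\tau$ is the L1 formula below; the paper's implicit scheme may differ in details such as the treatment of different fractional powers in fractures and matrix.
\[
\partial_t^{\alpha} u(t_n) \;=\; \frac{1}{\Gamma(1-\alpha)}\int_0^{t_n}\frac{u'(s)}{(t_n-s)^{\alpha}}\,ds
\;\approx\; \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}\sum_{j=0}^{n-1} b_j\,\bigl(u^{\,n-j}-u^{\,n-j-1}\bigr),
\qquad b_j=(j+1)^{1-\alpha}-j^{1-\alpha}.
\]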

We couple the L1 discretization for the Caputo derivative in time with a spectral Galerkin method in space to devise a scheme that solves quasilinear subdiffusion equations. Both the diffusivity and the source are allowed to be nonlinear functions of the solution. We prove the method's stability and convergence with spectral accuracy in space. The temporal order depends on the solution's regularity in time. Further, we support our results with numerical simulations that utilize parallelism for the spatial discretization. Moreover, as a side result we find asymptotically exact values of the error constants, along with their remainders, for discretizations of the Caputo derivative and fractional integrals. These constants are the smallest possible, which improves previously established results from the literature.
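
A quick numerical check of the L1 discretization on a function with a known Caputo derivative: for u(t) = t^2 the exact derivative of order alpha is 2 t^(2-alpha) / Gamma(3-alpha), and the error of the L1 formula should decay roughly like tau^(2-alpha). This verifies the discretization only and is not the paper's quasilinear subdiffusion solver.

```python
import numpy as np
from math import gamma

alpha = 0.6
u = lambda t: t**2
exact = lambda t: 2.0 * t**(2 - alpha) / gamma(3 - alpha)

def l1_caputo_at_T(N, T=1.0):
    """L1 approximation of the Caputo derivative of u at t = T with N steps."""
    tau = T / N
    t = np.linspace(0.0, T, N + 1)
    j = np.arange(N)
    b = (j + 1)**(1 - alpha) - j**(1 - alpha)
    diffs = u(t[N - j]) - u(t[N - j - 1])          # u^{n-j} - u^{n-j-1}
    return tau**(-alpha) / gamma(2 - alpha) * np.sum(b * diffs)

for N in (20, 40, 80, 160):
    err = abs(l1_caputo_at_T(N) - exact(1.0))
    print(f"N = {N:4d}  error = {err:.3e}")
```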

This dissertation studies a fundamental open challenge in deep learning theory: why do deep networks generalize well even while being overparameterized, unregularized and fitting the training data to zero error? In the first part of the thesis, we will empirically study how training deep networks via stochastic gradient descent implicitly controls the networks' capacity. Subsequently, to show how this leads to better generalization, we will derive {\em data-dependent} {\em uniform-convergence-based} generalization bounds with improved dependencies on the parameter count. Uniform convergence has in fact been the most widely used tool in the deep learning literature, thanks to its simplicity and generality. Given its popularity, in this thesis, we will also take a step back to identify the fundamental limits of uniform convergence as a tool to explain generalization. In particular, we will show that in some example overparameterized settings, {\em any} uniform convergence bound will provide only a vacuous generalization bound. With this realization in mind, in the last part of the thesis, we will change course and introduce an {\em empirical} technique to estimate generalization using unlabeled data. Our technique does not rely on any notion of uniform-convergence-based complexity and is remarkably precise. We will theoretically show why our technique enjoys such precision. We will conclude by discussing how future work could explore novel ways to incorporate distributional assumptions in generalization bounds (such as in the form of unlabeled data) and explore other tools to derive bounds, perhaps by modifying uniform convergence or by developing completely new tools altogether.

In two-phase image segmentation, convex relaxation has allowed global minimisers to be computed for a variety of data fitting terms. Many efficient approaches exist to compute a solution quickly. However, we consider whether the nature of the data fitting in this formulation allows reasonable assumptions to be made about the solution that can improve the computational performance further. In particular, we employ a well-known dual formulation of this problem and solve the corresponding equations in a restricted domain. We present experimental results that explore the dependence of the solution on this restriction and quantify improvements in the computational performance. This approach extends simply to analogous methods and could provide an efficient alternative for problems of this type.
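
A hedged sketch of a Chambolle-type dual projection iteration in which the dual variable is updated only inside a restricted region of the image; it illustrates the idea of solving the dual equations on a subdomain, while the paper's segmentation model, data fitting term and restriction strategy are not reproduced here, and the interface band below is a crude illustrative choice.

```python
import numpy as np

def grad(u):
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]        # forward differences, Neumann boundary
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):                             # discrete divergence, adjoint pair to grad
    d = np.zeros_like(px)
    d[0, :] = px[0, :]; d[1:-1, :] = px[1:-1, :] - px[:-2, :]; d[-1, :] = -px[-2, :]
    d[:, 0] += py[:, 0]; d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]; d[:, -1] += -py[:, -2]
    return d

def dual_tv_denoise(f, lam, mask, tau=0.125, iters=300):
    """Chambolle projection for min_u TV(u) + ||u - f||^2 / (2 lam),
    with the dual variable updated only where mask is True."""
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(div(px, py) - f / lam)
        norm = np.sqrt(gx**2 + gy**2)
        px[mask] = (px[mask] + tau * gx[mask]) / (1.0 + tau * norm[mask])
        py[mask] = (py[mask] + tau * gy[mask]) / (1.0 + tau * norm[mask])
    return f - lam * div(px, py)

# Synthetic two-phase image with noise; restrict dual updates to pixels whose
# intensity lies near the midpoint between the two phases (crude heuristic).
rng = np.random.default_rng(0)
f = np.zeros((128, 128)); f[32:96, 32:96] = 1.0
f += 0.2 * rng.standard_normal(f.shape)
mask = np.abs(f - 0.5) < 0.3
u = dual_tv_denoise(f, lam=0.5, mask=mask)
seg = (u > 0.5).astype(float)                # threshold to recover the two phases
print("foreground fraction:", seg.mean())
```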

This paper proposes a Reinforcement Learning (RL) algorithm to synthesize policies for a Markov Decision Process (MDP), such that a linear time property is satisfied. We convert the property into a Limit Deterministic Büchi Automaton (LDBA), then construct a product MDP between the automaton and the original MDP. A reward function is then assigned to the states of the product automaton, according to the accepting condition of the LDBA. With this reward function, our algorithm synthesizes a policy that satisfies the linear time property: as such, the policy synthesis procedure is "constrained" by the given specification. Additionally, we show that the RL procedure sets up an online value iteration method to calculate the maximum probability of satisfying the given property at any given state of the MDP; a convergence proof for the procedure is provided. Finally, the performance of the algorithm is evaluated via a set of numerical examples. We observe an improvement of one order of magnitude in the number of iterations required for synthesis compared to existing approaches.
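
A minimal sketch of the product construction between an MDP and an LDBA, with a reward placed on product states whose automaton component is accepting; both models are tiny hand-made examples, and the paper's construction additionally handles the LDBA's epsilon-transitions and accepting-set semantics in full generality.

```python
from itertools import product

# Toy MDP: transitions[state][action] -> list of (next_state, probability)
mdp_states = ["s0", "s1"]
actions = ["a", "b"]
transitions = {
    "s0": {"a": [("s0", 0.5), ("s1", 0.5)], "b": [("s1", 1.0)]},
    "s1": {"a": [("s0", 1.0)], "b": [("s1", 1.0)]},
}
label = {"s0": set(), "s1": {"goal"}}          # atomic propositions per state

# Toy automaton for "eventually goal": q1 is accepting and absorbing
ldba_states = ["q0", "q1"]
accepting = {"q1"}
def delta(q, props):
    if q == "q0" and "goal" in props:
        return "q1"
    return q

# Product MDP: states are (MDP state, automaton state)
prod_transitions = {}
reward = {}
for (s, q), a in product(product(mdp_states, ldba_states), actions):
    succs = []
    for s_next, p in transitions[s][a]:
        q_next = delta(q, label[s_next])       # automaton reads the new label
        succs.append(((s_next, q_next), p))
    prod_transitions[((s, q), a)] = succs
    # reward shaped by the accepting condition of the automaton component
    reward[((s, q), a)] = 1.0 if q in accepting else 0.0

print(prod_transitions[(("s0", "q0"), "a")])
```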
