
This manuscript investigates the problem of locational complexity, a type of complexity that emanates from a company's territorial strategy. Using an entropy-based measure for supply chain structural complexity (pars-complexity), we develop a theoretical framework for analysing the effects of locational complexity on the profitability of service/manufacturing networks. The proposed model is used to shed light on the reasons why network restructuring strategies may prove ineffective at reducing complexity-related costs. Our contribution is three-fold. First, we develop a novel mathematical formulation of a facility location problem that integrates the pars-complexity measure in the decision process. Second, using this model, we propose a decomposition of the penalties imposed by locational complexity into (a) an intrinsic cost of structural complexity; and (b) an avoidable cost of ignoring such complexity in the decision process. Such a decomposition is a valuable tool for identifying more effective measures for tackling locational complexity; moreover, it has allowed us to provide an explanation for the so-called addiction to growth within the locational context. Finally, we propose three alternative strategies that attempt to mimic different approaches used in practice by companies that have engaged in network restructuring processes. The impact of those approaches is evaluated through extensive numerical experiments. Our experimental results suggest that network restructuring efforts that are not accompanied by a substantial reduction of the company's target market fail to reduce complexity-related costs and therefore have a limited impact on the company's profitability.
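
As a point of reference for the entropy-based measure mentioned above, the sketch below computes a generic Shannon-entropy structural-complexity score over the shares of total flow between facilities and markets; the function name and the choice of flow shares are illustrative assumptions, not the paper's exact pars-complexity definition.

```python
import numpy as np

def structural_complexity(flows):
    """Generic Shannon-entropy score of a facility-to-market flow matrix.

    flows[i, j] = quantity shipped from facility i to market j.
    The score grows as flows are spread over more facility-market pairs,
    which is the intuition behind entropy-based complexity measures.
    """
    flows = np.asarray(flows, dtype=float)
    p = flows / flows.sum()               # share of total flow per pair
    p = p[p > 0]                          # 0 * log(0) is taken as 0
    return -np.sum(p * np.log2(p))        # entropy in bits

# A concentrated network (one facility serves everything) scores lower than
# a network that splits every market across several facilities.
concentrated = [[100, 80, 60], [0, 0, 0]]
fragmented   = [[50, 40, 30], [50, 40, 30]]
print(structural_complexity(concentrated))  # ~1.55 bits
print(structural_complexity(fragmented))    # ~2.55 bits
```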


We investigate a convective Brinkman--Forchheimer problem coupled with a heat transfer equation. The investigated model considers thermal diffusion and viscosity depending on the temperature. We prove the existence of a solution without restriction on the data, and uniqueness when the solution is slightly smoother and the data are suitably restricted. We propose a finite element discretization scheme for the considered model and derive convergence results and a priori error estimates. Finally, we illustrate the theory with numerical examples.
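
For readers unfamiliar with the model class, one commonly used form of a convective Brinkman--Forchheimer system with temperature-dependent viscosity, coupled with a convection--diffusion equation for the temperature, is sketched below; the exact formulation, exponents and boundary conditions studied in the paper may differ.

\[
\begin{aligned}
-\operatorname{div}\bigl(\nu(\theta)\,\nabla u\bigr) + (u\cdot\nabla)u + \alpha u + \beta |u|^{r-2}u + \nabla p &= f && \text{in } \Omega,\\
\operatorname{div} u &= 0 && \text{in } \Omega,\\
-\operatorname{div}\bigl(\kappa(\theta)\,\nabla \theta\bigr) + u\cdot\nabla\theta &= g && \text{in } \Omega,
\end{aligned}
\]
where $u$ is the velocity, $p$ the pressure, $\theta$ the temperature, $\nu(\theta)$ the temperature-dependent viscosity, $\kappa(\theta)$ the thermal diffusivity, and $\alpha,\beta>0$, $r\ge 2$ are Darcy and Forchheimer coefficients.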

Generative diffusion models have achieved spectacular performance in many areas of generative modeling. While the fundamental ideas behind these models come from non-equilibrium physics, variational inference and stochastic calculus, in this paper we show that many aspects of these models can be understood using the tools of equilibrium statistical mechanics. Using this reformulation, we show that generative diffusion models undergo second-order phase transitions corresponding to symmetry-breaking phenomena. We show that these phase transitions are always in a mean-field universality class, as they are the result of a self-consistency condition in the generative dynamics. We argue that the critical instability that arises from the phase transitions lies at the heart of their generative capabilities, which are characterized by a set of mean-field critical exponents. Furthermore, using the statistical physics of disordered systems, we show that memorization can be understood as a form of critical condensation corresponding to a disordered phase transition. Finally, we show that the dynamic equation of the generative process can be interpreted as a stochastic adiabatic transformation that minimizes the free energy while keeping the system in thermal equilibrium.
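
As a textbook analogue of the kind of mean-field self-consistency condition invoked here (this is the classical Curie--Weiss magnetization equation, not an equation taken from the paper), second-order symmetry breaking arises when a nonzero solution bifurcates from the symmetric one:

\[
m = \tanh\bigl(\beta (J m + h)\bigr), \qquad h = 0:\ \ m \neq 0 \text{ solutions appear once } \beta J > 1,
\]
with the usual mean-field critical behaviour $m \sim (\beta J - 1)^{1/2}$ near the transition.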

The ParaDiag family of algorithms solves differential equations by using preconditioners that can be inverted in parallel through diagonalization. In the context of optimal control of linear parabolic PDEs, the state-of-the-art ParaDiag method is limited to solving self-adjoint problems with a tracking objective. We propose three improvements to the ParaDiag method: the use of alpha-circulant matrices to construct an alternative preconditioner, a generalization of the algorithm for solving non-self-adjoint equations, and the formulation of an algorithm for terminal-cost objectives. We present novel analytic results about the eigenvalues of the preconditioned systems for all discussed ParaDiag algorithms in the case of self-adjoint equations, which prove the favorable properties of the alpha-circulant preconditioner. We use these results to perform a theoretical parallel-scaling analysis of ParaDiag for self-adjoint problems. Numerical tests confirm our findings and suggest that the self-adjoint behavior, which is backed by theory, generalizes to the non-self-adjoint case. We provide a sequential, open-source reference solver in Matlab for all discussed algorithms.
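
To make the role of the alpha-circulant preconditioner concrete, the short sketch below builds an alpha-circulant matrix from its first column and applies its inverse using only FFTs and diagonal scalings, which is the property that makes parallel inversion through diagonalization possible. It is a generic illustration of the standard alpha-circulant diagonalization, not the paper's solver (which is provided in Matlab).

```python
import numpy as np

def alpha_circulant(c, alpha):
    """Alpha-circulant matrix with first column c: entries that wrap
    around above the diagonal are multiplied by alpha."""
    n = len(c)
    C = np.empty((n, n), dtype=complex)
    for j in range(n):
        for k in range(n):
            C[j, k] = c[j - k] if j >= k else alpha * c[n + j - k]
    return C

n, alpha = 8, 0.05
rng = np.random.default_rng(0)
c = rng.standard_normal(n)
C = alpha_circulant(c, alpha)

# Diagonalization C = Gamma^{-1} F* diag(lam) F Gamma, with
# Gamma = diag(alpha^(k/n)) and F the (normalized) DFT matrix.
gamma = alpha ** (np.arange(n) / n)
lam = np.fft.fft(gamma * c)           # eigenvalues of C

# Apply C^{-1} to a vector using two FFTs and diagonal scalings only.
b = rng.standard_normal(n)
x = (1.0 / gamma) * np.fft.ifft(np.fft.fft(gamma * b) / lam)
print(np.allclose(C @ x, b))          # True
```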

Numerical experiments indicate that deep learning algorithms overcome the curse of dimensionality when approximating solutions of semilinear PDEs. For certain linear PDEs and semilinear PDEs with gradient-independent nonlinearities this has also been proved mathematically, i.e., it has been shown that the number of parameters of the approximating DNN increases at most polynomially in both the PDE dimension $d\in \mathbb{N}$ and the reciprocal of the prescribed accuracy $\epsilon\in (0,1)$. The main contribution of this paper is to rigorously prove for the first time that deep neural networks can also overcome the curse of dimensionality in the approximation of a certain class of nonlinear PDEs with gradient-dependent nonlinearities.
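
Spelled out, "overcoming the curse of dimensionality" is typically formalized as a bound of the following type (the precise norms, domains and constants in the paper may differ):

\[
\exists\, c, p, q > 0 \ \forall\, d \in \mathbb{N},\ \epsilon \in (0,1):\qquad
\bigl\| u_d - \Phi_{d,\epsilon} \bigr\| \le \epsilon
\quad\text{and}\quad
\#\operatorname{params}(\Phi_{d,\epsilon}) \le c\, d^{p} \epsilon^{-q},
\]
where $u_d$ denotes the PDE solution in dimension $d$ and $\Phi_{d,\epsilon}$ the approximating DNN.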

This article aims to provide approximate solutions for the non-linear collision-induced breakage equation using two different semi-analytical schemes, i.e., the variational iteration method (VIM) and the optimized decomposition method (ODM). The study also includes a detailed convergence analysis and error estimation for ODM in the case of the product collisional ($K(\epsilon,\rho)=\epsilon\rho$) and breakage ($b(\epsilon,\rho,\sigma)=\frac{2}{\rho}$) kernels with an exponential-decay initial condition. The novelty of the suggested approaches is demonstrated on three numerical examples by contrasting the estimated concentration functions and moments with exact solutions. Interestingly, in one case VIM provides a closed-form solution, while the finite-term series solutions obtained via both schemes provide very good approximations of the concentration function and moments.
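
For reference, the nonlinear collision-induced breakage equation is commonly written in the following form; the kernels quoted in the abstract slot in directly, while the paper's exact normalization and initial condition may differ slightly.

\[
\frac{\partial c(\epsilon,t)}{\partial t}
= \int_{\epsilon}^{\infty}\!\int_{0}^{\infty} K(\rho,\sigma)\, b(\epsilon,\rho,\sigma)\, c(\rho,t)\, c(\sigma,t)\, d\sigma\, d\rho
- \int_{0}^{\infty} K(\epsilon,\rho)\, c(\epsilon,t)\, c(\rho,t)\, d\rho,
\]
with $K(\epsilon,\rho)=\epsilon\rho$, $b(\epsilon,\rho,\sigma)=\frac{2}{\rho}$ and an exponential-decay initial condition such as $c(\epsilon,0)=e^{-\epsilon}$.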

As large pre-trained image-processing neural networks are being embedded in autonomous agents such as self-driving cars or robots, the question arises of how such systems can communicate with each other about the surrounding world, despite their different architectures and training regimes. As a first step in this direction, we systematically explore the task of \textit{referential communication} in a community of heterogeneous state-of-the-art pre-trained visual networks, showing that they can develop, in a self-supervised way, a shared protocol to refer to a target object among a set of candidates. This shared protocol can also be used, to some extent, to communicate about previously unseen object categories of different granularity. Moreover, a visual network that was not initially part of an existing community can learn the community's protocol with remarkable ease. Finally, we study, both qualitatively and quantitatively, the properties of the emergent protocol, providing some evidence that it is capturing high-level semantic features of objects.
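
A minimal sketch of the discrimination step in such a referential game is given below; the linear maps, dimensions and scoring rule are illustrative assumptions (in the actual setup the mappings into the shared message space would be learned), not the architecture used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: each network produces a 512-d representation of an image and the
# shared protocol lives in a small message space (both sizes are made up here).
d_repr, d_msg, n_candidates = 512, 16, 5

# Hypothetical linear maps into the shared message space; in a real referential
# game these would be trained so that the receiver picks the sender's target.
W_sender = rng.standard_normal((d_msg, d_repr)) * 0.05
W_receiver = rng.standard_normal((d_msg, d_repr)) * 0.05

def referential_round(target_repr, candidate_reprs):
    """One round: the sender encodes the target, the receiver scores each
    candidate against the message and guesses which one was meant."""
    message = W_sender @ target_repr                     # sender side
    scores = candidate_reprs @ W_receiver.T @ message    # receiver side
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                                 # softmax over candidates
    return int(np.argmax(probs)), probs

candidates = rng.standard_normal((n_candidates, d_repr))
guess, probs = referential_round(candidates[2], candidates)
print(guess, np.round(probs, 3))   # untrained maps guess essentially at random
```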

We introduce a multiple testing procedure that controls the median of the proportion of false discoveries (FDP) in a flexible way. The procedure only requires a vector of p-values as input and is comparable to the Benjamini-Hochberg method, which controls the mean of the FDP. Our method allows freely choosing one or several values of alpha after seeing the data -- unlike Benjamini-Hochberg, which can be very liberal when alpha is chosen post hoc. We prove these claims and illustrate them with simulations. Our procedure is inspired by a popular estimator of the total number of true hypotheses. We adapt this estimator to provide simultaneous median-unbiased estimators of the FDP, valid for finite samples. This simultaneity allows for the claimed flexibility. Our approach does not assume independence. The time complexity of our method is linear in the number of hypotheses, after sorting the p-values.
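
For comparison only: a minimal implementation of the Benjamini-Hochberg baseline mentioned above, which controls the mean of the FDP at a fixed, pre-specified alpha (this is not the proposed median-FDP procedure).

```python
import numpy as np

def benjamini_hochberg(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up procedure (controls the mean of the FDP).

    Returns a boolean mask of rejected hypotheses; the dominant cost is the
    sort of the p-values.
    """
    p = np.asarray(pvalues)
    m = p.size
    order = np.argsort(p)                       # ascending p-values
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])        # largest index passing the test
        reject[order[:k + 1]] = True            # reject all smaller p-values
    return reject

# Example: a few strong signals among nulls; the first two are rejected.
pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.30, 0.58, 0.76, 0.90])
print(benjamini_hochberg(pvals, alpha=0.05))
```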

Marginal structural models have been increasingly used by analysts in recent years to account for confounding bias in studies with time-varying treatments. The parameters of these models are often estimated using inverse probability of treatment weighting. To ensure that the estimated weights adequately control confounding, it is possible to check for residual imbalance between treatment groups in the weighted data. Several balance metrics have been developed and compared in the cross-sectional case but have not yet been evaluated and compared in longitudinal studies with time-varying treatment. We first extended the definition of several balance metrics to the case of a time-varying treatment, with or without censoring. We then compared the performance of these balance metrics in a simulation study by assessing the strength of the association between their estimated level of imbalance and bias. We found that the Mahalanobis balance performed best. Finally, the method was illustrated by estimating the cumulative effect of statin exposure over one year on the risk of cardiovascular disease or death in people aged 65 and over in population-wide administrative data. This illustration confirms the feasibility of employing our proposed metrics in large databases with multiple time points.
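
As a concrete illustration of checking residual imbalance in the weighted data, the sketch below computes a weighted standardized mean difference for a single covariate at a single time point; it is a generic diagnostic in the spirit of the cross-sectional literature, not one of the specific time-varying metrics compared in the paper.

```python
import numpy as np

def weighted_smd(x, treated, weights):
    """Weighted standardized mean difference of covariate x between the
    treated and untreated groups after applying the IPT weights."""
    x, a, w = map(np.asarray, (x, treated, weights))

    def wmean(v, w):
        return np.sum(w * v) / np.sum(w)

    def wvar(v, w):
        m = wmean(v, w)
        return np.sum(w * (v - m) ** 2) / np.sum(w)

    x1, w1 = x[a == 1], w[a == 1]
    x0, w0 = x[a == 0], w[a == 0]
    pooled_sd = np.sqrt((wvar(x1, w1) + wvar(x0, w0)) / 2)
    return np.abs(wmean(x1, w1) - wmean(x0, w0)) / pooled_sd

# With adequate weights the SMD shrinks toward 0; a common rule of thumb
# flags values above ~0.1 as residual imbalance.
```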

Surrogate modelling techniques have seen growing attention in recent years when applied to both modelling and optimisation of industrial design problems. These techniques are highly relevant when assessing the performance of a particular design carries a high cost, as the overall cost can be mitigated via the construction of a model to be queried in lieu of the available high-cost source. The construction of these models can sometimes employ other sources of information which are both cheaper and less accurate. The existence of these sources, however, poses the question of which sources should be used when constructing a model. Recent studies have attempted to characterise harmful data sources to guide practitioners in choosing when to ignore a certain source. These studies have done so in a synthetic setting, characterising sources using a large amount of data that is not available in practice. Some of these studies have also been shown to potentially suffer from bias in the benchmarks used in the analysis. In this study, we present a characterisation of harmful low-fidelity sources using only the limited data available to train a surrogate model. We employ recently developed benchmark filtering techniques to conduct a bias-free assessment, providing objectively varied benchmark suites of different sizes for future research. Analysing one of these benchmark suites with the technique known as Instance Space Analysis, we provide an intuitive visualisation of when a low-fidelity source should be used, and use this analysis to provide guidelines that can be used in an applied industrial setting.
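
To ground the idea of querying a model in lieu of the high-cost source, here is a minimal single-fidelity surrogate built with scikit-learn; the expensive_simulation function is a stand-in for a real high-cost evaluation, and the paper's multi-fidelity setting adds cheaper, less accurate sources on top of this.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):
    """Stand-in for a costly design evaluation (e.g. a long-running solver)."""
    return np.sin(3 * x) + 0.5 * x

# Only a handful of expensive evaluations are affordable.
X_train = np.linspace(0.0, 2.0, 6).reshape(-1, 1)
y_train = expensive_simulation(X_train).ravel()

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                     normalize_y=True)
surrogate.fit(X_train, y_train)

# The cheap surrogate is queried in place of the expensive source, here with
# an uncertainty estimate that can guide where to evaluate next.
X_query = np.linspace(0.0, 2.0, 5).reshape(-1, 1)
mean, std = surrogate.predict(X_query, return_std=True)
print(np.round(mean, 3), np.round(std, 3))
```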

In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communications and synchronisation, and is difficult to scale. In this paper we present four algorithms to solve these problems. The combination of these algorithms enables each agent to improve their task allocation strategy through reinforcement learning, while changing how much they explore the system in response to how optimal they believe their current strategy is, given their past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, limiting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery over no-knowledge-retention approaches when system connectivity is impacted, and is tested against systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
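
A minimal sketch of the idea of tying exploration to how good an agent believes its current strategy is follows; this is an illustrative epsilon-greedy-style rule with made-up names, not the paper's four algorithms.

```python
import random

class AllocatorAgent:
    """Toy agent that picks which peer to delegate a subtask to.

    It keeps a running estimate of each peer's success rate and explores
    more when its believed best option still looks poor.
    """
    def __init__(self, peers):
        self.estimates = {p: 0.5 for p in peers}   # believed success rates
        self.counts = {p: 0 for p in peers}

    def choose_peer(self):
        best = max(self.estimates, key=self.estimates.get)
        # Explore with probability proportional to how far the best-known
        # option is from ideal: confident agents exploit, unsure ones explore.
        epsilon = 1.0 - self.estimates[best]
        if random.random() < epsilon:
            return random.choice(list(self.estimates))
        return best

    def update(self, peer, success):
        self.counts[peer] += 1
        n = self.counts[peer]
        # Incremental mean update of the believed success rate.
        self.estimates[peer] += (float(success) - self.estimates[peer]) / n

agent = AllocatorAgent(peers=["a", "b", "c"])
true_rates = {"a": 0.9, "b": 0.6, "c": 0.3}
for _ in range(100):
    peer = agent.choose_peer()
    agent.update(peer, success=random.random() < true_rates[peer])
print(agent.estimates)
```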
