This article deals with a location problem for balancing service efficiency and equality. In public service systems, some people may feel envy when they must travel farther than others to access services. The strength of this envy can be measured by comparing one's travel distance to a service facility against a threshold distance. Using the total envy function, four extended p-median problems are proposed to trade off service efficiency against equality. Five analytical properties of the new problems are mathematically proven. The new problems were tested on three sets of well-designed instances. The experimentation shows that equality measures, such as the standard deviation, the mean absolute deviation, and the Gini coefficient of travel distances, can be substantially improved by minimizing the travel cost together with the spatial envy. It also shows that, when the service supply is fixed in terms of the number of facilities, service equality can be considerably improved by slightly increasing the travel distance, and that when the supply is increased in terms of the number of facilities, both service efficiency and spatial equality can be significantly improved.
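
As a concrete illustration, the following is a minimal Python sketch of one plausible envy-augmented p-median objective, assuming envy is measured as each demand point's travel-distance excess over the threshold; the exact envy function, weighting, and solution method in the paper may differ, and the brute-force search below is a hypothetical toy suitable only for tiny instances.

```python
# Minimal sketch of an envy-augmented p-median objective. The envy definition
# (excess of travel distance over a threshold) and the weight alpha are
# assumptions for illustration, not the paper's exact formulation.
from itertools import combinations

def total_envy(dists, threshold):
    """Total envy: summed excess of each travel distance over the threshold."""
    return sum(max(0.0, d - threshold) for d in dists)

def objective(assign_dists, threshold, alpha):
    """Weighted sum of total travel cost (p-median term) and total envy."""
    return sum(assign_dists) + alpha * total_envy(assign_dists, threshold)

def solve_brute_force(d, p, threshold, alpha):
    """Enumerate all p-subsets of candidate sites (toy instances only).
    d[i][j] = distance from demand point i to candidate site j."""
    n_sites = len(d[0])
    best = (float("inf"), None)
    for sites in combinations(range(n_sites), p):
        # each demand point is served by its nearest open facility
        nearest = [min(d[i][j] for j in sites) for i in range(len(d))]
        best = min(best, (objective(nearest, threshold, alpha), sites))
    return best

# toy instance: 4 demand points, 3 candidate sites, open p = 1 facility
d = [[2, 5, 9], [4, 1, 6], [8, 3, 2], [7, 6, 1]]
print(solve_brute_force(d, p=1, threshold=4.0, alpha=1.0))  # -> (18.0, (1,))
```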

This research article discusses a numerical solution of the radiative transfer equation based on the weak Galerkin finite element method. We discretize the angular variable by means of the discrete-ordinate method. The resulting semi-discrete hyperbolic system is then approximated using the weak Galerkin method. A stability result for the proposed numerical method is established, and an a priori error analysis is carried out under a suitable norm. Numerical experiments are presented to corroborate the theoretical results.
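
To make the angular discretization concrete, here is a small Python sketch of the discrete-ordinate step alone, replacing the angular integral with Gauss-Legendre quadrature in 1D slab geometry; the weak Galerkin spatial discretization itself is not reproduced, and the quadrature order and isotropic-scattering assumption are ours.

```python
# Discrete-ordinate sketch: approximate the angular integral in the 1D
# radiative transfer equation by Gauss-Legendre quadrature. Illustrative only;
# the paper's weak Galerkin spatial scheme is omitted.
import numpy as np

def discrete_ordinates(n):
    """Quadrature nodes mu_k and weights w_k on [-1, 1] (weights sum to 2)."""
    return np.polynomial.legendre.leggauss(n)

def scattering_source(psi, w, sigma_s):
    """Isotropic scattering source: (sigma_s / 2) * sum_k w_k * psi_k."""
    return 0.5 * sigma_s * psi @ w

mu, w = discrete_ordinates(8)
psi = np.ones_like(mu)        # dummy angular flux at one spatial point
print(scattering_source(psi, w, sigma_s=0.9))  # = 0.9 when psi == 1
```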

In social choice theory with ordinal preferences, a voting method satisfies the axiom of positive involvement if adding to a preference profile a voter who ranks an alternative uniquely first cannot cause that alternative to go from winning to losing. In this note, we prove a new impossibility theorem concerning this axiom: there is no ordinal voting method satisfying positive involvement that also satisfies the Condorcet winner and loser criteria, resolvability, and a common invariance property for Condorcet methods, namely that the choice of winners depends only on the ordering of majority margins by size.
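
For readers unfamiliar with the invariance property, the following small Python helper (our illustration, not from the note) computes the majority-margin matrix from ordinal ballots and tests for a Condorcet winner; the invariance property says winners depend only on the ordering of these margins by size.

```python
# Illustrative helpers: pairwise majority margins from ordinal ballots, plus a
# Condorcet-winner check. Not taken from the note itself.

def margins(ballots, alternatives):
    """m[a][b] = (#voters ranking a above b) - (#voters ranking b above a)."""
    counts = {a: {b: 0 for b in alternatives} for a in alternatives}
    for ranking in ballots:
        pos = {alt: i for i, alt in enumerate(ranking)}
        for a in alternatives:
            for b in alternatives:
                if a != b and pos[a] < pos[b]:
                    counts[a][b] += 1
    # convert counts to signed margins
    return {a: {b: counts[a][b] - counts[b][a] for b in alternatives}
            for a in alternatives}

def condorcet_winner(ballots, alternatives):
    m = margins(ballots, alternatives)
    for a in alternatives:
        if all(m[a][b] > 0 for b in alternatives if b != a):
            return a
    return None  # majority cycle or ties: no Condorcet winner

ballots = [("a", "b", "c"), ("a", "c", "b"), ("b", "c", "a")]
print(condorcet_winner(ballots, ("a", "b", "c")))  # -> 'a'
```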

Context: Experiment replications play a central role in the scientific method. Although software engineering experimentation has matured a great deal, the number of experiment replications is still relatively small. Software engineering experiments are composed of complex concepts, procedures and artefacts. Laboratory packages are a means of transferring knowledge among researchers to facilitate experiment replications. Objective: This paper investigates the experiment replication process to find out what information is needed to successfully replicate an experiment. Our objective is to propose the content and structure of laboratory packages for software engineering experiments. Method: We evaluated seven replications of three different families of experiments. Each replication had a different experimenter who was, at the time, unfamiliar with the experiment. During the first iterations of the study, we identified experimental incidents and then proposed a laboratory package structure that addressed these incidents, including document usability improvements. We used the later iterations to validate and generalize the laboratory package structure for use in all software engineering experiments. We aimed to solve a specific problem, while at the same time looking at how to contribute to the body of knowledge on laboratory packages. Results: We generated a laboratory package for three different experiments. These packages eased the replication of the respective experiments. The evaluation that we conducted shows that the laboratory package proposal is acceptable and reduces the effort currently required to replicate experiments in software engineering. Conclusion: We think that the content and structure that we propose for laboratory packages can be useful for other software engineering experiments.

We propose a novel neural network architecture based on the conformer transducer that adds a contextual information flow to ASR systems. Our method improves the accuracy of recognizing uncommon words without harming the word error rate on regular words. We explore the improvement in uncommon-word accuracy when using the new model and/or shallow fusion with a contextual language model, and find that combining both yields a cumulative gain in uncommon-word recognition accuracy.
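
As background, shallow fusion conventionally means interpolating the transducer's log-probability with a (contextual) language model's log-probability during beam search. The sketch below shows that scoring rule on a toy pair of hypotheses; the weight and word-level granularity are assumptions, not the paper's exact configuration.

```python
# Shallow-fusion scoring sketch: rank hypotheses by the ASR log-prob plus a
# weighted contextual-LM log-prob. The weight lam is an assumed hyperparameter.

def fused_score(asr_logprob: float, lm_logprob: float, lam: float = 0.3) -> float:
    """Score used to rank a hypothesis extension during decoding."""
    return asr_logprob + lam * lm_logprob

# toy example: the contextual LM boosts a rare name the acoustic model is
# unsure about, flipping the 1-best hypothesis
candidates = {"norman": (-2.1, -0.5), "normal": (-1.9, -4.0)}  # (asr, ctx-LM)
best = max(candidates, key=lambda w: fused_score(*candidates[w]))
print(best)  # -> 'norman'
```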

Accelerated life-tests (ALTs) are used for inferring the lifetime characteristics of highly reliable products. In particular, step-stress ALTs increase the stress level to which units under test are subjected at certain pre-fixed times, thus accelerating the product's wear and inducing its failure. In some cases, due to cost or product-nature constraints, continuous monitoring of devices is infeasible, and the units are instead inspected for failures at particular inspection time points. In such a setup, the ALT response is interval-censored. Furthermore, when a test unit fails, there is often more than one fatal cause of failure, known as competing risks. In this paper, we assume that all competing risks are independent and follow exponential distributions with scale parameters depending on the stress level. Under this setup, we present a family of robust estimators based on the density power divergence, including the classical maximum likelihood estimator (MLE) as a particular case. We derive asymptotic and robustness properties of the minimum density power divergence estimator (MDPDE), showing its consistency for large samples. Based on these MDPDEs, estimates of the lifetime characteristics of the product, as well as of cause-specific lifetime characteristics, are then developed. Direct asymptotic, transformed, and bootstrap confidence intervals for the mean lifetime to failure, the reliability at a mission time, and distribution quantiles are proposed, and their performance is compared through Monte Carlo simulations. Moreover, the performance of the MDPDE family is examined through an extensive numerical study, and the methods of inference discussed here are illustrated with a real-data example concerning electronic devices.
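
To show the estimator's general shape, here is a Python sketch of the density power divergence objective for a plain (uncensored) exponential sample; the paper's interval-censoring, step-stress, and competing-risks structure is omitted, and the tuning parameter value is an arbitrary choice. As the tuning parameter tends to zero, the objective approaches the negative log-likelihood, recovering the MLE.

```python
# Density power divergence (DPD) objective for an Exp(rate) model, minimized
# numerically. Generic DPD recipe for illustration; the paper's censored
# step-stress likelihood is not reproduced here.
import numpy as np
from scipy.optimize import minimize_scalar

def dpd_objective(rate, x, alpha):
    """H_n(rate) = integral of f^(1+a) dx - (1 + 1/a) * mean(f(x_i)^a)."""
    f = rate * np.exp(-rate * x)              # exponential density at the data
    integral = rate**alpha / (1.0 + alpha)    # closed form of the integral term
    return integral - (1.0 + 1.0 / alpha) * np.mean(f**alpha)

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=500)      # true rate = 0.5
x[:10] = 50.0                                 # inject gross outliers
fit = minimize_scalar(dpd_objective, bounds=(1e-3, 10.0),
                      args=(x, 0.5), method="bounded")
print(fit.x)  # robust rate estimate, still close to 0.5 despite contamination
```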

Quantum networks crucially rely on the availability of high-quality entangled pairs of qubits, known as entangled links, distributed across distant nodes. Maintaining the quality of these links is a challenging task due to the presence of time-dependent noise, also known as decoherence. Entanglement purification protocols offer a solution by converting multiple low-quality entangled states into a smaller number of higher-quality ones. In this work, we introduce a framework to analyse the performance of entanglement buffering setups that combine entanglement consumption, decoherence, and entanglement purification. We propose two key metrics: the availability, which is the steady-state probability that an entangled link is present, and the average consumed fidelity, which quantifies the steady-state quality of consumed links. We then investigate a two-node system, where each node possesses two quantum memories: one for long-term entanglement storage, and another for entanglement generation. We model this setup as a continuous-time stochastic process and derive analytical expressions for the performance metrics. Our findings unveil a trade-off between the availability and the average consumed fidelity. We also bound these performance metrics for a buffering system that employs the well-known bilocal Clifford purification protocols. Importantly, our analysis demonstrates that, in the presence of noise, consistently purifying the buffered entanglement increases the average consumed fidelity, even when some buffered entanglement is discarded due to purification failures.
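
The following toy calculation is our own drastic simplification, not the paper's stochastic model: it treats the buffered link as a two-state on/off continuous-time Markov chain with generation at rate g and loss (consumption plus purification-failure discards) at rate mu, and assumes exponential fidelity decay with an exponentially distributed link age at consumption.

```python
# Two-state on/off chain toy for an entanglement buffer. All modelling choices
# here (rates, exponential decay, Exp(mu) age at consumption) are assumptions
# made for illustration; the paper's model is richer.

def availability(g: float, mu: float) -> float:
    """Steady-state P(link present) for off->on rate g and on->off rate mu."""
    return g / (g + mu)

def avg_consumed_fidelity(f0: float, gamma: float, mu: float) -> float:
    """If fidelity decays as f0*exp(-gamma*t) and the link's age at consumption
    is Exp(mu), then E[f0 * exp(-gamma * T)] = f0 * mu / (mu + gamma)."""
    return f0 * mu / (mu + gamma)

print(availability(g=2.0, mu=1.0))                        # -> ~0.667
print(avg_consumed_fidelity(f0=0.95, gamma=0.2, mu=1.0))  # -> ~0.792
```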

Dephasing is a prominent noise mechanism that afflicts quantum information carriers, and it is one of the main challenges towards realizing useful quantum computation, communication, and sensing. Here we consider discrimination and estimation of bosonic dephasing channels, when using the most general adaptive strategies allowed by quantum mechanics. We reduce these difficult quantum problems to simple classical ones based on the probability densities defining the bosonic dephasing channels. By doing so, we rigorously establish the optimal performance of various distinguishability and estimation tasks and construct explicit strategies to achieve this performance. To the best of our knowledge, this is the first example of a non-Gaussian bosonic channel for which there are exact solutions for these tasks.
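
Since the paper reduces these quantum tasks to classical ones over the probability densities defining the channels, a numeric illustration only needs classical distances between those densities. The sketch below computes a total variation distance between two candidate phase-noise densities on a grid; the Gaussian-shaped densities and the grid are our assumptions, and this particular distance is illustrative rather than the paper's exact figure of merit.

```python
# Classical-reduction illustration: distance between two phase-noise densities
# p(phi) defining bosonic dephasing channels. Densities and grid are assumed.
import numpy as np

def tv_distance(p, q, dphi):
    """TV(p, q) = 0.5 * integral of |p - q| dphi, on a uniform grid."""
    return 0.5 * np.sum(np.abs(p - q)) * dphi

phi = np.linspace(-np.pi, np.pi, 4001)
dphi = phi[1] - phi[0]

def gauss(s):
    # narrow Gaussian on [-pi, pi]; effectively normalized for small s
    return np.exp(-phi**2 / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)

print(tv_distance(gauss(0.3), gauss(0.6), dphi))
```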

Objective: Prediction models are popular in medical research and practice. By predicting an outcome of interest for specific patients, these models may help inform difficult treatment decisions, and they are often hailed as the poster children for personalized, data-driven healthcare. Many prediction models are deployed for decision support based on their prediction accuracy in validation studies. We investigate whether this is a safe and valid approach. Materials and Methods: We show that using prediction models for decision making can lead to harmful decisions, even when the predictions exhibit good discrimination after deployment. Such models are harmful self-fulfilling prophecies: their deployment harms a group of patients, but the worse outcomes of these patients do not invalidate the predictive power of the model. Results: Our main result is a formal characterization of a set of such prediction models. We further show that models that are well calibrated both before and after deployment are useless for decision making, as their deployment caused no change in the data distribution. Discussion: Our results point to the need to revise standard practices for the validation, deployment, and evaluation of prediction models used in medical decisions. Conclusion: Outcome prediction models can yield harmful self-fulfilling prophecies when used for decision making; a new perspective on prediction model development, deployment, and monitoring is needed.
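
A toy simulation (our construction, intended only to make the mechanism concrete, not the paper's formal characterization): a risk model is used to withhold a beneficial treatment from patients flagged as high risk, which worsens their outcomes, yet the deployed model still shows strong discrimination.

```python
# Self-fulfilling-prophecy toy: the policy acts on predicted risk, the flagged
# group is harmed, and post-deployment discrimination (AUC) remains high.
# All numbers below are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
risk = rng.uniform(0, 1, n)                 # model's predicted risk
treat = risk < 0.5                          # policy: treat only low-risk patients
p_bad = np.clip(risk - 0.3 * treat, 0, 1)   # treatment lowers outcome risk
bad = rng.uniform(0, 1, n) < p_bad

# AUC via the rank-sum identity: still well above 0.5 despite the harm done
ranks = risk.argsort().argsort() + 1
n_pos, n_neg = bad.sum(), (~bad).sum()
auc = (ranks[bad].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
print(f"post-deployment AUC: {auc:.2f}")
```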

Online customer data provide valuable information for product design and marketing research, as they can reveal customer preferences. However, analyzing these data with artificial intelligence (AI) for data-driven design is challenging because of potentially concealed patterns. Moreover, most studies in these research areas are limited to identifying customers' needs. In this study, we propose a game-theoretic machine learning (ML) method that extracts comprehensive design implications for product development. The method first uses a genetic algorithm to select, rank, and combine product features that maximize customer satisfaction based on online ratings. We then use SHAP (SHapley Additive exPlanations), a game-theoretic method that assigns a value to each feature based on its contribution to the prediction, to provide a guideline for assessing the importance of each feature to total satisfaction. We apply our method to a real-world dataset of laptops from Kaggle and derive design implications from the results. Our approach tackles a major challenge in the field of multi-criteria decision making and can help product designers and marketers understand customer preferences better with less data and effort. The proposed method outperforms benchmark methods in terms of the relevant performance metrics.
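
Here is a brief Python sketch of the SHAP attribution step on a satisfaction model; the feature names, synthetic data, and random-forest surrogate are hypothetical stand-ins, and the genetic-algorithm feature-selection stage of the method is not shown.

```python
# SHAP sketch: fit a satisfaction model on product features, then attribute
# predictions to features. Data and feature names are invented placeholders.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = ["ram_gb", "weight_kg", "battery_h", "price_usd"]
X = rng.normal(size=(500, len(features)))
y = 0.8 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=500)  # toy ratings

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)

# mean |SHAP| = each feature's global importance for predicted satisfaction
for name, imp in zip(features, np.abs(shap_values).mean(axis=0)):
    print(f"{name}: {imp:.3f}")
```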

In large-scale systems, there are fundamental challenges when centralised techniques are used for task allocation: the number of interactions is limited by resource constraints such as computation, storage, and network communication. Scalability can be increased by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, which is itself difficult to scale. In this paper we present four algorithms to solve these problems. In combination, these algorithms enable each agent to improve its task-allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems in which the agents' behaviours are constrained by resource-usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task-allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides a 5x better performance recovery over approaches without knowledge retention when system connectivity is impacted, and it is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
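
The sketch below illustrates just the core idea of exploration that contracts as an agent grows confident in its strategy, using a bandit-style epsilon-greedy allocator; the four algorithms, resource limits, and networking effects from the paper are not reproduced, and the confidence heuristic is our own.

```python
# Adaptive-exploration allocator sketch: explore more while estimated peer
# values are close (strategy uncertain), less once a clear winner emerges.
# Assumes rewards are scaled to [0, 1]; all details are illustrative.
import random

class Allocator:
    def __init__(self, n_peers: int, lr: float = 0.1):
        self.q = [0.0] * n_peers   # estimated value of delegating to peer i
        self.lr = lr

    def epsilon(self) -> float:
        """Exploration rate shrinks as the value estimates separate."""
        spread = max(self.q) - min(self.q)
        return max(0.05, 1.0 - spread)

    def choose(self) -> int:
        if random.random() < self.epsilon():
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=self.q.__getitem__)

    def update(self, peer: int, reward: float) -> None:
        self.q[peer] += self.lr * (reward - self.q[peer])

alloc = Allocator(n_peers=3)
true_quality = [0.2, 0.9, 0.5]     # hidden per-peer success rates
for _ in range(2000):
    p = alloc.choose()
    alloc.update(p, float(random.random() < true_quality[p]))
print(alloc.q)                      # peer 1 should rank highest
```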
