
The classical house allocation problem involves assigning $n$ houses (or items) to $n$ agents according to their preferences. A key criterion in such problems is satisfying fairness constraints such as envy-freeness. We consider a generalization of this problem in which the agents are placed on the vertices of a graph (corresponding to a social network), and each agent can only experience envy towards its neighbors. Our goal is to minimize the aggregate envy among the agents as a natural fairness objective, i.e., the sum of all pairwise envy values over all edges of the social graph. When agents have identical and evenly-spaced valuations, our problem reduces to the well-studied problem of linear arrangements. For identical valuations with possibly uneven spacing, we show a number of deep and surprising ways in which our setting departs from this classical problem. More broadly, we contribute several structural and computational results for various classes of graphs, including NP-hardness results for disjoint unions of paths, cycles, stars, or cliques, and fixed-parameter tractable (and, in some cases, polynomial-time) algorithms for paths, cycles, stars, cliques, and their disjoint unions. Additionally, a conceptual contribution of our work is the formulation of a structural property for disconnected graphs that we call separability, which leads to efficient parameterized algorithms for finding optimal allocations.
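
For concreteness, a minimal sketch of the objective described above: it computes the aggregate envy of an allocation over the edges of a social graph and finds an optimum by brute force on a toy instance. The code and its names are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's code): aggregate envy of an allocation on a
# social graph.  Agent i envies neighbour j by max(0, v_i(a_j) - v_i(a_i)),
# and the objective sums this quantity over both directions of every edge.
from itertools import permutations

def aggregate_envy(values, alloc, edges):
    """values[i][h]: agent i's value for house h; alloc[i]: house given to agent i."""
    total = 0.0
    for i, j in edges:
        total += max(0.0, values[i][alloc[j]] - values[i][alloc[i]])
        total += max(0.0, values[j][alloc[i]] - values[j][alloc[j]])
    return total

def brute_force_optimum(values, edges):
    """Exhaustive search over all allocations; only viable for a handful of agents."""
    n = len(values)
    return min(permutations(range(n)), key=lambda a: aggregate_envy(values, a, edges))

# Toy example on a path 0 - 1 - 2 with identical, unevenly spaced valuations.
values = [[0, 1, 5]] * 3
edges = [(0, 1), (1, 2)]
best = brute_force_optimum(values, edges)
print(best, aggregate_envy(values, best, edges))
```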

Related content

Designing coresets--small-space sketches of the data that preserve the cost of every solution up to a $(1\pm \epsilon)$ factor--is an important research direction in the study of center-based $k$-clustering problems, such as $k$-means or $k$-median. Feldman and Langberg [STOC'11] showed that for $k$-clustering of $n$ points in general metrics, it is possible to obtain coresets whose size depends logarithmically on $n$. Moreover, such a dependence on $n$ is inevitable in general metrics. A significant amount of recent work in the area is devoted to obtaining coresets whose sizes are independent of $n$ (i.e., ``small'' coresets) for special metrics, like $d$-dimensional Euclidean spaces, doubling metrics, metrics of graphs of bounded treewidth, or those excluding a fixed minor. In this paper, we provide the first constructions of small coresets for $k$-clustering in the metrics induced by geometric intersection graphs, such as Euclidean-weighted Unit Disk/Square Graphs. These constructions follow from a general theorem that identifies two canonical properties of a graph metric sufficient for obtaining small coresets. The proof of our theorem builds on the recent work of Cohen-Addad, Saulpic, and Schwiegelshohn [STOC '21], which ensures small-sized coresets conditioned on the existence of an interesting set of centers, called a ``centroid set''. The main technical contribution of our work is the proof of the existence of such a small-sized centroid set for graphs that satisfy the two canonical geometric properties. The new coreset construction helps to design the first $(1+\epsilon)$-approximation for center-based clustering problems in UDGs and USGs that is fixed-parameter tractable in $k$ and $\epsilon$ (FPT-AS).
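
For readers unfamiliar with the terminology, the coreset guarantee referred to above is the standard one: a weighted point set $S \subseteq P$ with weights $w$ is an $\epsilon$-coreset if, for every set $C$ of $k$ centers,
$$(1-\epsilon)\,\mathrm{cost}(P,C) \;\le\; \sum_{p\in S} w(p)\,\mathrm{dist}(p,C)^{z} \;\le\; (1+\epsilon)\,\mathrm{cost}(P,C),$$
where $\mathrm{cost}(P,C)=\sum_{p\in P}\mathrm{dist}(p,C)^{z}$, with $z=1$ for $k$-median and $z=2$ for $k$-means.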

When analyzing complex networks, an important task is the identification of those nodes which play a leading role for the overall communicability of the network. In the context of modifying networks (or making them robust against targeted attacks or outages), it is also relevant to know how sensitively the network's communicability reacts to changes in certain nodes or edges. Recently, the concept of total network sensitivity was introduced in [O. De la Cruz Cabrera, J. Jin, S. Noschese, L. Reichel, Communication in complex networks, Appl. Numer. Math., 172, pp. 186-205, 2022], which makes it possible to measure how sensitive the total communicability of a network is to the addition or removal of certain edges. One shortcoming of this concept is that sensitivities are extremely costly to compute with a straightforward approach (orders of magnitude more expensive than the corresponding communicability measures). In this work, we present computational procedures for estimating network sensitivity at a cost that is essentially linear in the number of nodes for many real-world complex networks. Additionally, we extend the sensitivity concept so that it also covers the sensitivity of subgraph centrality and the Estrada index, and we discuss the case of node removal. We propose a priori bounds for these sensitivities which capture the qualitative behavior well and give insight into the general behavior of matrix-function-based network indices under perturbations. These bounds are based on decay results for Fr\'echet derivatives of matrix functions with structured, low-rank direction terms, which might be of independent interest for applications beyond network analysis.
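
As a point of reference (not the authors' procedure), total communicability is $TC(A)=\mathbf{1}^{T} e^{A} \mathbf{1}$, and the sensitivity with respect to one edge is a directional (Fr\'echet) derivative of the matrix exponential. The sketch below computes both with dense SciPy routines, which is exactly the cubic-cost baseline the paper seeks to avoid.

```python
# Hedged sketch: total communicability TC(A) = 1^T expm(A) 1 and its
# sensitivity to a single edge, obtained via SciPy's dense Frechet derivative
# of the matrix exponential (O(n^3) per edge).
import numpy as np
from scipy.linalg import expm, expm_frechet

def total_communicability(A):
    ones = np.ones(A.shape[0])
    return ones @ expm(A) @ ones

def edge_sensitivity(A, i, j):
    """d/dt TC(A + t*(E_ij + E_ji)) at t = 0 for an undirected edge (i, j)."""
    E = np.zeros_like(A)
    E[i, j] = E[j, i] = 1.0
    _, L = expm_frechet(A, E)          # Frechet derivative of expm at A in direction E
    ones = np.ones(A.shape[0])
    return ones @ L @ ones

# Small path graph 0 - 1 - 2; sensitivity of TC to adding the edge (0, 2).
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
print(total_communicability(A), edge_sensitivity(A, 0, 2))
```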

In this paper, the first large-scale application of multiscale-spectral generalized finite element methods (MS-GFEM) to composite aero-structures is presented. The crucial novelty lies in the introduction of A-harmonicity in the local approximation spaces, which, in contrast to [Babuska, Lipton, Multiscale Model. Simul. 9, 2011], is enforced more efficiently via a constraint in the local eigenproblems. This significant modification leads to excellent approximation properties, which turn out to be essential to accurately capture material strains and stresses with a low-dimensional approximation space, hence maximising model order reduction. The implementation of the framework in the DUNE software package, as well as a detailed description of all components of the method, is presented and exemplified on a composite laminated beam under compressive loading. The excellent parallel scalability of the method, as well as its superior performance compared to the related, previously introduced GenEO method, is demonstrated on two realistic application cases, including a C-shaped wing spar with complex geometry. Further, by allowing low-cost approximate solves for closely related models or geometries, this efficient, novel technology provides the basis for future applications in optimisation or uncertainty quantification on challenging problems in composite aero-structures.
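
As a schematic illustration only: MS-GFEM and GenEO-type methods build each local approximation space from a generalized eigenproblem on a subdomain and keep a few dominant modes. The sketch below solves such an eigenproblem on synthetic matrices; it does not reproduce the paper's A-harmonic constraint or the DUNE implementation, and all names are ours.

```python
# Schematic sketch: a local spectral basis from a generalized eigenproblem
# A_loc v = lambda B_loc v, keeping the modes with the smallest eigenvalues.
# A_loc and B_loc are assumed symmetric (B_loc positive definite).
import numpy as np
from scipy.linalg import eigh

def local_spectral_basis(A_loc, B_loc, n_modes):
    """Return the n_modes generalized eigenpairs with smallest eigenvalues."""
    eigvals, eigvecs = eigh(A_loc, B_loc)      # solves A_loc v = lambda B_loc v
    return eigvals[:n_modes], eigvecs[:, :n_modes]

# Tiny synthetic subdomain problem.
rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
A_loc = M @ M.T + 8 * np.eye(8)                # SPD "stiffness"-like matrix
B_loc = np.eye(8)
vals, basis = local_spectral_basis(A_loc, B_loc, 3)
print(vals)
```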

Marginal likelihood, also known as model evidence, is a fundamental quantity in Bayesian statistics. It is used for model selection via Bayes factors or for empirical Bayes tuning of prior hyper-parameters. Yet, the calculation of evidence has remained a longstanding open problem in Gaussian graphical models. Currently, the only feasible solutions are for special cases such as the Wishart or G-Wishart, in moderate dimensions. We develop an approach based on a novel telescoping block decomposition of the precision matrix that allows the estimation of evidence by application of Chib's technique under a very broad class of priors satisfying mild requirements. Specifically, the requirements are: (a) the priors on the diagonal terms of the precision matrix can be written as gamma or scale mixtures of gamma random variables, and (b) those on the off-diagonal terms can be represented as normal or scale mixtures of normal. This includes structured priors such as the Wishart or G-Wishart, and more recently introduced element-wise priors such as the Bayesian graphical lasso and the graphical horseshoe. Among these, the true marginal is known in closed form for the Wishart, providing a useful validation of our approach. For the general setting of the other three, and several more priors satisfying conditions (a) and (b) above, the calculation of evidence has remained an open question that this article resolves under a unifying framework.
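
For context, the basic identity behind Chib's technique, which the telescoping block decomposition applies block by block, is
$$\log m(y) \;=\; \log p(y \mid \theta^{*}) \,+\, \log \pi(\theta^{*}) \,-\, \log \hat{\pi}(\theta^{*} \mid y),$$
where $\theta^{*}$ is any fixed point (typically of high posterior density) and the posterior ordinate $\hat{\pi}(\theta^{*}\mid y)$ is estimated from MCMC output.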

Motivated by applications such as machine repair, project monitoring, and anti-poaching patrol scheduling, we study intervention planning of stochastic processes under resource constraints. This planning problem has previously been modeled as restless multi-armed bandits (RMAB), where each arm is an intervention-dependent Markov Decision Process. However, the existing literature assumes that all intervention resources belong to a single uniform pool, limiting its applicability to real-world settings where interventions are carried out by a set of workers, each with their own costs, budgets, and intervention effects. In this work, we consider a novel RMAB setting, called multi-worker restless bandits (MWRMAB), with heterogeneous workers. The goal is to plan an intervention schedule that maximizes the expected reward while satisfying budget constraints on each worker as well as fairness in terms of the load assigned to each worker. Our contributions are two-fold: (1) we provide a multi-worker extension of the Whittle index to tackle heterogeneous costs and per-worker budgets, and (2) we develop an index-based scheduling policy to achieve fairness. Further, we evaluate our method on various cost structures and show that it significantly outperforms other baselines in terms of fairness without sacrificing much accumulated reward.
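
To make the index concept concrete, here is a minimal sketch (not the paper's multi-worker extension) of the classical single-arm Whittle index: the passive-action subsidy at which acting and remaining passive become equally valuable, found by binary search under an indexability assumption. All names and the toy arm are ours.

```python
# Illustrative sketch of the classical Whittle index of one arm.
import numpy as np

def value_iteration(P, R, lam, beta=0.9, iters=500):
    """P[a]: transition matrix for action a in {0: passive, 1: active}; R: state rewards."""
    V = np.zeros(len(R))
    for _ in range(iters):
        Q0 = R + lam + beta * P[0] @ V      # passive action receives the subsidy lam
        Q1 = R + beta * P[1] @ V
        V = np.maximum(Q0, Q1)
    return Q0, Q1

def whittle_index(P, R, s, lo=-10.0, hi=10.0, tol=1e-6):
    """Binary search for the subsidy making state s indifferent (assumes indexability)."""
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        Q0, Q1 = value_iteration(P, R, lam)
        if Q1[s] > Q0[s]:
            lo = lam        # acting still preferred: raise the subsidy
        else:
            hi = lam
    return 0.5 * (lo + hi)

# Two-state toy arm: state 1 is "good"; acting improves the chance of reaching it.
P = [np.array([[0.9, 0.1], [0.4, 0.6]]),     # passive
     np.array([[0.4, 0.6], [0.1, 0.9]])]     # active
R = np.array([0.0, 1.0])
print([whittle_index(P, R, s) for s in range(2)])
```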

Traffic systems are multi-agent cyber-physical systems whose performance is closely related to human welfare. They operate in open environments and are subject to uncertainties from various sources, making their performance hard to verify by traditional model-based approaches. Alternatively, statistical model checking (SMC) can verify their performance by sequentially drawing sample data until the correctness of a performance specification can be inferred with the desired statistical accuracy. This work aims to verify traffic systems with privacy, motivated by the fact that the data used may include personal information (e.g., daily itineraries) that could be unintentionally leaked by observing the execution of the SMC algorithm. To formally capture data privacy in SMC, we introduce the concept of expected differential privacy (EDP), which constrains how much the algorithm execution can change, in the expectation sense, when the data change. Accordingly, we introduce an exponential randomization mechanism for the SMC algorithm to achieve the EDP. Our case study on traffic intersections by Vissim simulation shows the high accuracy of SMC in traffic model verification without significantly sacrificing computing efficiency. The case study also shows that EDP successfully bounds the algorithm outputs to guarantee privacy.
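
For context, the classical exponential mechanism of differential privacy, which the randomization described above is reminiscent of, samples an output with probability proportional to $\exp(\epsilon\, u(D,r) / (2\Delta u))$. A minimal sketch, with illustrative names of our own choosing:

```python
# Hedged illustration of the classical exponential mechanism: select output r
# with probability proportional to exp(eps * u(D, r) / (2 * sensitivity)),
# where sensitivity bounds how much u can change when the data change.
import numpy as np

def exponential_mechanism(utilities, eps, sensitivity, rng=None):
    """utilities[r] = u(D, r) for each candidate output r; returns a sampled index."""
    rng = rng or np.random.default_rng()
    scores = eps * np.asarray(utilities, dtype=float) / (2.0 * sensitivity)
    scores -= scores.max()                 # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Example: privately report which of three candidate verdicts best fits the data.
print(exponential_mechanism(utilities=[10.0, 9.5, 2.0], eps=1.0, sensitivity=1.0))
```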

Lloyd Shapley's cooperative value allocation theory is a central concept in game theory that is widely used in various fields to allocate resources, assess individual contributions, and determine fairness. The Shapley value formula and his four axioms that characterize it form the foundation of the theory. The Shapley value can be assigned only when all cooperative game players are assumed to eventually form the grand coalition. The purpose of this paper is to extend Shapley's theory to cover value allocation at every partial coalition state. To achieve this, we first extend the Shapley axioms into a new set of five axioms that characterize value allocation at every partial coalition state, where the allocation at the grand coalition coincides with the Shapley value. Second, we present a stochastic path integral formula, where each path now represents a general coalition process. This can be viewed as an extension of the Shapley formula. We apply these concepts to provide a dynamic interpretation and extension of the value allocation schemes of Shapley, Nash, Kohlberg and Neyman. This generalization is made possible by taking into account Hodge calculus, stochastic processes, and path integration of edge flows on graphs. We recognize that such a generalization is not limited to the coalition game graph. As a result, we define Hodge allocation, a general allocation scheme that can be applied to any cooperative multigraph and yield allocation values at any cooperative stage.
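
As a baseline for the generalization described above, here is a minimal sketch of the classical Shapley value, computed exactly as each player's average marginal contribution over all orderings (feasible only for small games); the example game is ours, not the paper's.

```python
# Exact Shapley value by averaging marginal contributions over all player orderings.
from itertools import permutations

def shapley_values(players, v):
    """v maps a frozenset coalition to its worth; returns dict player -> value."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: val / len(orders) for p, val in phi.items()}

# Three-player glove game: one left glove (player 0), two right gloves (1, 2).
def worth(S):
    return float(0 in S and (1 in S or 2 in S))

print(shapley_values([0, 1, 2], worth))   # player 0 gets 2/3, the others 1/6 each
```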

This PhD thesis contains several contributions to the field of statistical causal modeling. Statistical causal models are statistical models embedded with causal assumptions that allow for inference and reasoning about the behavior of stochastic systems affected by external manipulation (interventions). This thesis contributes to the research areas concerning the estimation of causal effects, causal structure learning, and distributionally robust (out-of-distribution generalizing) prediction methods. We present novel and consistent linear and non-linear causal effect estimators in instrumental variable settings that employ data-dependent mean squared prediction error regularization. Our proposed estimators show, in certain settings, mean squared error improvements compared to both canonical and state-of-the-art estimators. We show that recent research on distributionally robust prediction methods has connections to well-studied estimators from econometrics. This connection leads us to prove that general K-class estimators possess distributional robustness properties. Furthermore, we propose a general framework for distributional robustness with respect to intervention-induced distributions. In this framework, we derive sufficient conditions for the identifiability of distributionally robust prediction methods and present impossibility results that show the necessity of several of these conditions. We present a new structure learning method applicable in additive noise models with directed trees as causal graphs. We prove consistency in a vanishing identifiability setup and provide a method for testing substructure hypotheses with asymptotic family-wise error control that remains valid post-selection. Finally, we present heuristic ideas for learning summary graphs of nonlinear time-series models.
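
The K-class estimators mentioned above have a compact closed form, $\hat{\beta}(\kappa) = (X^{T}(I-\kappa M_Z)X)^{-1} X^{T}(I-\kappa M_Z)y$ with $M_Z$ the annihilator of the instruments $Z$; $\kappa=0$ gives OLS and $\kappa=1$ gives 2SLS. The sketch below illustrates this on simulated instrumental-variable data; it is an illustration of the standard estimator, not of the thesis's regularized proposals, and all variable names are ours.

```python
# General K-class estimator: kappa = 0 recovers OLS, kappa = 1 recovers 2SLS.
import numpy as np

def k_class_estimator(X, y, Z, kappa):
    """beta_hat = (X'(I - kappa*M_Z)X)^{-1} X'(I - kappa*M_Z)y, with M_Z = I - P_Z."""
    n = len(y)
    P_Z = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    W = np.eye(n) - kappa * (np.eye(n) - P_Z)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Simulated instrumental-variable data with a hidden confounder H; true effect is 1.5.
rng = np.random.default_rng(1)
n = 2000
Z = rng.standard_normal((n, 1))
H = rng.standard_normal(n)
X = (2 * Z[:, 0] + H + rng.standard_normal(n)).reshape(-1, 1)
y = 1.5 * X[:, 0] + 2 * H + rng.standard_normal(n)
print(k_class_estimator(X, y, Z, kappa=0.0),   # OLS, biased by confounding
      k_class_estimator(X, y, Z, kappa=1.0))   # 2SLS, consistent for 1.5
```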

In the last decade or so, we have witnessed deep learning reinvigorating the machine learning field. It has solved many problems in the domains of computer vision, speech recognition, natural language processing, and various other tasks with state-of-the-art performance. In these domains, the data are generally represented in Euclidean space. Various other domains conform to non-Euclidean space, for which a graph is an ideal representation. Graphs are suitable for representing the dependencies and interrelationships between various entities. Traditionally, handcrafted features for graphs are incapable of providing the necessary inference for various tasks from this complex data representation. Recently, advances in deep learning have increasingly been applied to tasks on graph data. This article provides a comprehensive survey of graph neural networks (GNNs) in each learning setting: supervised, unsupervised, semi-supervised, and self-supervised learning. A taxonomy of each graph-based learning setting is provided, with logical divisions of the methods falling under that setting. The approaches for each learning task are analyzed from both theoretical and empirical standpoints. Further, we provide general architecture guidelines for building GNNs. Various applications and benchmark datasets are also provided, along with open challenges still plaguing the general applicability of GNNs.
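
To make the surveyed model class concrete, here is a toy sketch of one GCN-style message-passing layer, $H' = \mathrm{ReLU}(\hat{D}^{-1/2}(A+I)\hat{D}^{-1/2} H W)$; this is one popular GNN variant rather than a summary of the survey, and all names are ours.

```python
# One graph convolutional (GCN-style) layer on a small dense graph.
import numpy as np

def gcn_layer(A, H, W):
    """A: adjacency (n x n), H: node features (n x d), W: weights (d x d')."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)   # ReLU

# Triangle graph with 2-dimensional node features.
A = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
H = np.array([[1., 0.], [0., 1.], [1., 1.]])
W = np.array([[0.5, -0.2], [0.1, 0.3]])
print(gcn_layer(A, H, W))
```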

This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators such that the error is greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the classical and the proposed estimators in their power to estimate the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under different simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.
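
As a generic illustration of the confounding issue described above (not the paper's estimator), the sketch below simulates a lending scenario in which a credit score drives both the approval decision and repayment, so the naive difference in means is biased, while inverse-propensity weighting with the (here known) propensity recovers the true effect. All names and numbers are ours.

```python
# Naive vs. inverse-propensity-weighted (IPW) estimates of a treatment effect
# under confounding by a credit score; the true effect of approval is 1.0.
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
credit_score = rng.normal(size=n)                       # confounder
p_approve = 1 / (1 + np.exp(-credit_score))             # lender decision depends on it
approved = rng.binomial(1, p_approve)
repayment = 1.0 * approved + 2.0 * credit_score + rng.normal(size=n)

naive = repayment[approved == 1].mean() - repayment[approved == 0].mean()

w = approved / p_approve - (1 - approved) / (1 - p_approve)
ipw = np.mean(w * repayment)                            # Horvitz-Thompson style estimate

print(f"naive: {naive:.2f}  ipw: {ipw:.2f}  truth: 1.00")
```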
