
We study Dual-Primal Isogeometric Tearing and Interconnecting (IETI-DP) solvers for non-conforming multi-patch discretizations of a generalized Poisson problem. We realize the coupling between the patches using a symmetric interior penalty discontinuous Galerkin (SIPG) approach. Previously, we have assumed that the interfaces between patches always consist of whole edges. In this paper, we drop this requirement and allow T-junctions. This extension is vital for the consideration of sliding interfaces, for example between the rotor and the stator of an electrical motor. One critical part for the handling of T-junctions in IETI-DP solvers is the choice of the primal degrees of freedom. We propose to add all basis functions that are non-zero at any of the vertices to the primal space. Since there are several such basis functions at any T-junction, we call this concept "fat vertices". For this choice, we show a condition number bound that coincides with the bound for the conforming case.

Related content

Features introduced in iOS 8 for interaction between apps, and between apps and the system:
  • Today (iOS and OS X): widgets for the Today view of Notification Center
  • Share (iOS and OS X): post content to web services or share content with others
  • Actions (iOS and OS X): app extensions to view or manipulate content inside another app
  • Photo Editing (iOS): edit a photo or video in Apple's Photos app with extensions from third-party apps
  • Finder Sync (OS X): remote file storage in the Finder with support for Finder content annotation
  • Storage Provider (iOS): an interface between files inside an app and other apps on a user's device
  • Custom Keyboard (iOS): system-wide alternative keyboards


We consider the analysis and design of distributed wireless networks wherein remote radio heads (RRHs) coordinate transmissions to serve multiple users on the same resource block (RB). Specifically, we analyze two possible multiple-input multiple-output wireless fronthaul solutions: multicast and zero forcing (ZF) beamforming. We develop a statistical model for the fronthaul rate and, coupled with an analysis of the user access rate, we optimize the placement of the RRHs. This model allows us to formulate the location optimization problem with a statistical constraint on fronthaul outage. Our results are cautionary, showing that the fronthaul requires considerable bandwidth to enable joint service to users. This requirement can be relaxed by serving a low number of users on the same RB. Additionally, we show that, with a fixed number of antennas, for the multicast fronthaul, it is prudent to concentrate these antennas on a few RRHs. However, for the ZF beamforming fronthaul, it is better to distribute the antennas on more RRHs. For the parameters chosen, using a ZF beamforming fronthaul improves the typical access rate by approximately 8% compared to multicast. Crucially, our work quantifies the effect of these fronthaul solutions and provides an effective tool for the design of distributed networks.
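The trade-off between the two fronthaul options can be illustrated at the link level: a multicast message must be decodable by every RRH, so its rate is set by the weakest link, while dedicated (e.g. ZF-separated) streams each run at their own link rate. The helper below is a toy sketch under this Shannon-rate assumption, not the paper's statistical fronthaul model:

```python
import math

def fronthaul_rates(snrs):
    """Toy link-level comparison of fronthaul options.

    snrs: linear SNR of each RRH's fronthaul link.
    Returns (multicast rate, list of dedicated per-link rates) in bits/s/Hz.
    The multicast rate is limited by the weakest RRH link, since every RRH
    must decode the common message.
    """
    multicast = math.log2(1.0 + min(snrs))
    dedicated = [math.log2(1.0 + s) for s in snrs]
    return multicast, dedicated
```

With heterogeneous links, the multicast rate collapses to the worst link while dedicated streams retain their individual rates, which is one intuition behind the bandwidth requirement noted above.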

We study the $k$-median with discounts problem, wherein we are given clients with non-negative discounts and seek to open at most $k$ facilities. The goal is to minimize the sum, over all clients, of the distance from each client to its nearest open facility reduced by that client's discount value, with each contribution floored at zero. $k$-median with discounts unifies many classic clustering problems, e.g., $k$-center, $k$-median, $k$-facility $\ell$-centrum, etc. We obtain a bi-criteria constant-factor approximation using an iterative LP rounding algorithm. Our result improves the previously best approximation guarantee for $k$-median with discounts [Ganesh et al., ICALP'21]. We also devise bi-criteria constant-factor approximation algorithms for the matroid and knapsack versions of median clustering with discounts.
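The discounted objective can be written down directly. The sketch below is illustrative only (planar points and the helper name are assumptions; it evaluates the objective for a given facility set and does not implement the paper's LP-rounding algorithm):

```python
import math

def discounted_cost(clients, discounts, facilities):
    """Objective of k-median with discounts: each client pays its distance
    to the nearest open facility, reduced by its own discount value and
    floored at zero.

    clients, facilities: lists of 2-D points; discounts: one value per client.
    """
    total = 0.0
    for (cx, cy), r in zip(clients, discounts):
        d = min(math.hypot(cx - fx, cy - fy) for fx, fy in facilities)
        total += max(0.0, d - r)
    return total
```

Setting every discount to zero recovers the plain $k$-median objective; other discount patterns recover the special cases mentioned above.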

This article introduces a new instrumental variable approach for estimating unknown population parameters from data with nonrandom missing values. With coarse and discrete instruments, Shao and Wang (2016) proposed a semiparametric method that uses the added information to identify the tilting parameter of the missing-data propensity model. A naive application of this idea to continuous instruments through arbitrary discretizations is apt to be inefficient and may be questionable in some settings. We propose a nonparametric method that does not require arbitrary discretizations but instead scans over continuous dichotomizations of the instrument and combines the resulting scan statistics to estimate the unknown parameters via weighted integration. We establish the asymptotic normality of the proposed integrated estimator and of the underlying scan processes uniformly across the instrument sample space. Simulation studies and the analysis of a real data set demonstrate the gains of the methodology over procedures that rely either on arbitrary discretizations or on moments of the instrument.
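The scanning idea can be sketched schematically: dichotomize the continuous instrument at every observed value, compute a statistic at each cut, and combine by weighted integration rather than committing to one arbitrary discretization. Everything below is a placeholder skeleton (the statistic, the uniform weights, and the function name are assumptions, not the paper's estimator):

```python
def integrated_scan(z_values, stat, weights=None):
    """Scan over dichotomizations of a continuous instrument.

    z_values: observed instrument values.
    stat: maps a 0/1 indicator vector (one entry per observation) to a scalar;
          a stand-in for the scan statistic at that cut.
    Each observed value (except the largest) defines a cut z <= c; the cut
    statistics are combined by weighted averaging (uniform by default).
    """
    cuts = sorted(set(z_values))[:-1]  # each cut splits the sample in two
    if not cuts:
        return 0.0
    if weights is None:
        weights = [1.0 / len(cuts)] * len(cuts)
    vals = [stat([1 if z <= c else 0 for z in z_values]) for c in cuts]
    return sum(w * v for w, v in zip(weights, vals))
```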

Functional connectivity (FC) for quantifying interactions between regions of the brain is commonly estimated from functional magnetic resonance imaging (fMRI). There has been increasing interest in the potential of multimodal imaging to obtain more robust estimates of FC in high-dimensional settings. Recent work has found uses for graphical algorithms in combining fMRI signals with structural connectivity estimated from diffusion tensor imaging (DTI) for FC estimation. At the same time new algorithms focused on de novo identification of graphical subnetworks with significant levels of connectivity are finding other biological applications with great success. Such algorithms develop notions of graphical influence that aid in revealing subnetworks of interest while maintaining rigorous statistical control on discoveries. We develop a novel algorithm adapting some of these methods to FC estimation with computational efficiency and scalability. Our proposed algorithm leverages a graphical random walk on DTI data to define a new measure of structural influence that highlights connected components of maximal interest. The subnetwork topology is then compared to a suitable null hypothesis using permutation testing. Finally, individual discovered components are tested for significance. Extensive simulations show our method is comparable in power to those currently in use while being fast, robust, and simple to implement. We also analyze task-fMRI data from the Human Connectome Project database and find novel insights into brain interactions during the performance of a motor task. It is anticipated that the transparency and flexibility of our approach will prove valuable as further understanding of the structure-function relationship informs the future of network estimation. Scalability will also only become more important as neurological data become more granular and grow in dimension.
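The random-walk influence idea can be illustrated in miniature: run a simple random walk on the structural graph and use visit frequencies as an influence score, so that densely connected regions accumulate visits. This is only a toy analogue; the paper's measure is defined on DTI data with permutation-based statistical control, and the function below is an assumption for illustration:

```python
import random

def walk_influence(adj, steps, start, seed=0):
    """Visit counts of a simple random walk as a crude influence score.

    adj: dict mapping each vertex to a non-empty list of neighbours.
    Returns a dict of visit counts after `steps` steps from `start`;
    highly visited vertices hint at strongly connected subnetworks.
    """
    rng = random.Random(seed)
    counts = {v: 0 for v in adj}
    v = start
    for _ in range(steps):
        counts[v] += 1
        v = rng.choice(adj[v])
    return counts
```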

In the single winner determination problem, we have n voters and m candidates, and each voter j incurs a cost c(i, j) if candidate i is chosen. Our objective is to choose a candidate that minimizes the expected total cost incurred by the voters; however, as we only have access to the agents' preference rankings over the outcomes, a loss of efficiency is inevitable. This loss of efficiency is quantified by distortion. We give an instance of the metric single winner determination problem for which any randomized social choice function has distortion at least 2.063164. This disproves the long-standing conjecture that there exists a randomized social choice function that has a worst-case distortion of at most 2.
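For a fixed cost matrix, the ratio underlying distortion is straightforward to evaluate. The helper below is only an illustration of that ratio for a given randomized rule; the distortion of a social choice function additionally takes a worst case over all metrics consistent with the voters' rankings, which this sketch does not do:

```python
def cost_ratio(costs, dist):
    """Ratio of expected social cost under a randomized rule to the optimum.

    costs[i][j]: cost voter j incurs if candidate i wins.
    dist[i]: probability the rule elects candidate i.
    """
    social = [sum(row) for row in costs]          # total cost per candidate
    expected = sum(p * c for p, c in zip(dist, social))
    return expected / min(social)
```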

Clustering categorical distributions in the finite-dimensional probability simplex is a fundamental task met in many applications dealing with normalized histograms. Traditionally, the differential-geometric structures of the probability simplex have been used either by (i) setting the Riemannian metric tensor to the Fisher information matrix of the categorical distributions, or (ii) defining the dualistic information-geometric structure induced by a smooth dissimilarity measure, the Kullback-Leibler divergence. In this work, we introduce for clustering tasks a novel computationally-friendly framework for modeling geometrically the probability simplex: The {\em Hilbert simplex geometry}. In the Hilbert simplex geometry, the distance is the non-separable Hilbert's metric distance which satisfies the property of information monotonicity with distance level set functions described by polytope boundaries. We show that both the Aitchison and Hilbert simplex distances are norm distances on normalized logarithmic representations with respect to the $\ell_2$ and variation norms, respectively. We discuss the pros and cons of those different statistical modelings, and benchmark experimentally these different kinds of geometries for center-based $k$-means and $k$-center clustering. Furthermore, since a canonical Hilbert distance can be defined on any bounded convex subset of the Euclidean space, we also consider Hilbert's geometry of the elliptope of correlation matrices and study its clustering performance compared to Frobenius and log-det divergences.
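The norm characterization above makes the Hilbert simplex distance easy to compute: it is the variation norm (maximum minus minimum) of the coordinate-wise log-ratios. A minimal sketch for points in the open simplex (the function name is ours):

```python
import math

def hilbert_simplex_distance(p, q):
    """Hilbert metric between two points of the open probability simplex.

    On the simplex, the Hilbert distance reduces to the variation
    (max minus min) of the coordinate-wise log-ratios log(p_i / q_i),
    i.e. a variation-norm distance on the logarithmic representation.
    Requires strictly positive coordinates.
    """
    logs = [math.log(pi / qi) for pi, qi in zip(p, q)]
    return max(logs) - min(logs)
```

Note the distance is invariant to rescaling of either argument, which is why normalization of the histograms does not affect it.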

We study the problem of estimating the size of maximum matching and minimum vertex cover in sublinear time. Denoting the number of vertices by $n$ and the average degree in the graph by $\bar{d}$, we obtain the following results for both problems: * A multiplicative $(2+\epsilon)$-approximation that takes $\tilde{O}(n/\epsilon^2)$ time using adjacency list queries. * A multiplicative-additive $(2, \epsilon n)$-approximation in $\tilde{O}((\bar{d} + 1)/\epsilon^2)$ time using adjacency list queries. * A multiplicative-additive $(2, \epsilon n)$-approximation in $\tilde{O}(n/\epsilon^{3})$ time using adjacency matrix queries. All three results are provably time-optimal up to polylogarithmic factors culminating a long line of work on these problems. Our main contribution and the key ingredient leading to the bounds above is a new and near-tight analysis of the average query complexity of the randomized greedy maximal matching algorithm which improves upon a seminal result of Yoshida, Yamamoto, and Ito [STOC'09].
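The randomized greedy maximal matching algorithm analyzed above is simple to state: scan the edges in a uniformly random order and add an edge whenever both endpoints are still unmatched. The sublinear-time results simulate this process locally via oracle queries; the global version below is only an illustration of the algorithm itself:

```python
import random

def randomized_greedy_matching(adj, seed=0):
    """Randomized greedy maximal matching.

    adj: dict mapping each vertex to a set of neighbours (undirected graph).
    Edges are processed in a uniformly random order; an edge joins the
    matching iff both endpoints are currently unmatched.
    """
    rng = random.Random(seed)
    edges = sorted({tuple(sorted((u, v))) for u, nbrs in adj.items() for v in nbrs})
    rng.shuffle(edges)
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v))
    return matching
```

Any maximal matching has at least half the size of a maximum matching, which is where the multiplicative factor 2 in the approximation guarantees comes from.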

The geometric median of a tuple of vectors is the vector that minimizes the sum of Euclidean distances to the vectors of the tuple. Classically called the Fermat-Weber problem and applied to facility location, it has become a major component of the robust learning toolbox. It is typically used to aggregate the (processed) inputs of different data providers, whose motivations may diverge, especially in applications like content moderation. Interestingly, as a voting system, the geometric median has well-known desirable properties: it is a provably good average approximation, it is robust to a minority of malicious voters, and it satisfies the "one voter, one unit force" fairness principle. However, what was not known is the extent to which the geometric median is strategyproof. Namely, can a strategic voter significantly gain by misreporting their preferred vector? We prove in this paper that, perhaps surprisingly, the geometric median is not even $\alpha$-strategyproof, where $\alpha$ bounds what a voter can gain by deviating from truthfulness. But we also prove that, in the limit of a large number of voters with i.i.d. preferred vectors, the geometric median is asymptotically $\alpha$-strategyproof. We show how to compute this bound $\alpha$. We then generalize our results to voters who care more about some dimensions. Roughly, we show that, if some dimensions are more polarized and regarded as more important, then the geometric median becomes less strategyproof. Interestingly, we also show how the skewed geometric medians can improve strategyproofness. Nevertheless, if voters care differently about different dimensions, we prove that no skewed geometric median can achieve strategyproofness for all voters. Overall, our results constitute a coherent set of insights into the extent to which the geometric median is suitable to aggregate high-dimensional disagreements.
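The geometric median has no closed form in general, but the classical Weiszfeld fixed-point iteration computes it. A minimal sketch (it assumes iterates never land exactly on an input point, which a production solver would handle explicitly):

```python
import math

def geometric_median(points, iters=200, eps=1e-12):
    """Weiszfeld iteration for the geometric median.

    Repeatedly replace the current estimate by the average of the input
    points weighted by the inverse of their distance to the estimate,
    which is the fixed-point condition of the Fermat-Weber objective.
    """
    m = [sum(c) / len(points) for c in zip(*points)]  # start at the centroid
    for _ in range(iters):
        num = [0.0] * len(m)
        den = 0.0
        for p in points:
            d = math.dist(p, m)
            if d < eps:          # estimate coincides with a data point
                continue
            w = 1.0 / d
            den += w
            for k, pk in enumerate(p):
                num[k] += w * pk
        if den == 0.0:
            break
        m = [x / den for x in num]
    return m
```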

Alternating Direction Method of Multipliers (ADMM) is a widely used tool for machine learning in distributed settings, where a machine learning model is trained over distributed data sources through an interactive process of local computation and message passing. Such an iterative process could cause privacy concerns of data owners. The goal of this paper is to provide differential privacy for ADMM-based distributed machine learning. Prior approaches on differentially private ADMM exhibit low utility under high privacy guarantee and often assume the objective functions of the learning problems to be smooth and strongly convex. To address these concerns, we propose a novel differentially private ADMM-based distributed learning algorithm called DP-ADMM, which combines an approximate augmented Lagrangian function with time-varying Gaussian noise addition in the iterative process to achieve higher utility for general objective functions under the same differential privacy guarantee. We also apply the moments accountant method to bound the end-to-end privacy loss. The theoretical analysis shows that DP-ADMM can be applied to a wider class of distributed learning problems, is provably convergent, and offers an explicit utility-privacy tradeoff. To our knowledge, this is the first paper to provide explicit convergence and utility properties for differentially private ADMM-based distributed learning algorithms. The evaluation results demonstrate that our approach can achieve good convergence and model accuracy under high end-to-end differential privacy guarantee.
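The noise-addition ingredient can be illustrated on a single first-order step: clip the update direction to bound its sensitivity, then perturb it with Gaussian noise before applying it. This is a generic sketch in the spirit of DP-ADMM's time-varying Gaussian mechanism, not the paper's algorithm; the function name and hyperparameters are assumptions:

```python
import random

def noisy_update(theta, grad, lr, sigma, clip, rng):
    """One differentially private first-order update.

    Clips the gradient to norm at most `clip` (bounding per-record
    sensitivity), adds per-coordinate Gaussian noise with std `sigma`,
    and takes a gradient step of size `lr`.
    """
    norm = sum(g * g for g in grad) ** 0.5
    scale = min(1.0, clip / max(norm, 1e-12))
    return [t - lr * (g * scale + rng.gauss(0.0, sigma))
            for t, g in zip(theta, grad)]
```

In DP-ADMM the noise variance varies over iterations, and the end-to-end privacy loss of the whole sequence of such steps is tracked with the moments accountant.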

In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely: the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
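The centralized baseline whose rate the distributed dual method matches is Nesterov's accelerated gradient descent. A minimal sketch of the constant-momentum variant for an $L$-smooth, $\mu$-strongly convex function (the function name and interface are ours):

```python
def nesterov_agd(grad, x0, L, mu, iters):
    """Nesterov's accelerated gradient method, strongly convex variant.

    grad: callable returning the gradient at a point (list of floats).
    Uses step size 1/L and the constant momentum
    q = (sqrt(L) - sqrt(mu)) / (sqrt(L) + sqrt(mu)).
    """
    q = (L ** 0.5 - mu ** 0.5) / (L ** 0.5 + mu ** 0.5)
    x = y = list(x0)
    for _ in range(iters):
        g = grad(y)
        x_new = [yi - gi / L for yi, gi in zip(y, g)]   # gradient step
        y = [xn + q * (xn - xi) for xn, xi in zip(x_new, x)]  # momentum
        x = x_new
    return x
```

In the distributed setting of the paper this scheme is run on the dual problem, where each gradient evaluation decomposes across the network, at the extra cost governed by the spectral gap of the interaction matrix.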
