
We propose a novel $\ell_1+\ell_2$-penalty, which we refer to as the Generalized Elastic Net, for regression problems where the feature vectors are indexed by vertices of a given graph and the true signal is believed to be smooth or piecewise constant with respect to this graph. Under the assumption of correlated Gaussian design, we derive upper bounds for the prediction and estimation errors, which are graph-dependent and consist of a parametric rate for the unpenalized portion of the regression vector and another term that depends on our network alignment assumption. We also provide a coordinate descent procedure based on the Lagrange dual objective to compute this estimator for large-scale problems. Finally, we compare our proposed estimator to existing regularized estimators on a number of real and synthetic datasets and discuss its potential limitations.
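As a minimal sketch of the optimization problem behind such a penalty, the following proximal-gradient routine (an illustrative implementation, not the paper's coordinate descent procedure; the function name, exact objective scaling, and step-size rule are our own) minimizes $\frac{1}{2n}\|y - X\beta\|_2^2 + \lambda_1\|\beta\|_1 + \frac{\lambda_2}{2}\beta^\top L\beta$, with $L$ the Laplacian of the feature graph:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the l1 norm: shrink toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def generalized_elastic_net(X, y, L, lam1, lam2, n_iter=500):
    """Proximal-gradient sketch for
    (1/2n)||y - X b||^2 + lam1 ||b||_1 + (lam2/2) b^T L b."""
    n, p = X.shape
    # Step size from a Lipschitz bound on the smooth part of the objective.
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n
                  + lam2 * np.linalg.norm(L, 2) + 1e-12)
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = -X.T @ (y - X @ b) / n + lam2 * (L @ b)
        b = soft_threshold(b - step * grad, step * lam1)
    return b
```

Note that the quadratic term penalizes differences across graph edges, so a signal that is constant on a connected graph (here, $L\beta = 0$) incurs no $\ell_2$ penalty.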


Standard neural networks struggle to generalize under distribution shifts in computer vision. Fortunately, combining multiple networks can consistently improve out-of-distribution generalization. In particular, weight averaging (WA) strategies were shown to perform best on the competitive DomainBed benchmark; they directly average the weights of multiple networks despite their nonlinearities. In this paper, we propose Diverse Weight Averaging (DiWA), a new WA strategy whose main motivation is to increase the functional diversity across averaged models. To this end, DiWA averages weights obtained from several independent training runs: indeed, models obtained from different runs are more diverse than those collected along a single run thanks to differences in hyperparameters and training procedures. We motivate the need for diversity by a new bias-variance-covariance-locality decomposition of the expected error, exploiting similarities between WA and standard functional ensembling. Moreover, this decomposition highlights that WA succeeds when the variance term dominates, which we show occurs when the marginal distribution changes at test time. Experimentally, DiWA consistently improves the state of the art on DomainBed without inference overhead.
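The core averaging step can be sketched as follows (a minimal numpy stand-in for framework parameter dictionaries; `diverse_weight_average` is a hypothetical helper name, and the per-run training with varied hyperparameters is assumed to have happened upstream):

```python
import numpy as np

def diverse_weight_average(state_dicts):
    """Uniformly average parameter dictionaries collected from M
    independent training runs, key by key. Averaging across runs (rather
    than along one run) is what supplies the functional diversity."""
    keys = state_dicts[0].keys()
    return {k: np.mean([sd[k] for sd in state_dicts], axis=0) for k in keys}
```

In practice the averaged models must share an architecture and, typically, a common pre-trained initialization so that their weights lie in a linearly connected region.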

We consider a variant of the hide-and-seek game in which a seeker inspects multiple hiding locations to find multiple items hidden by a hider. Each hiding location has a maximum hiding capacity and a probability of detecting its hidden items when an inspection by the seeker takes place. The objective of the seeker (resp. hider) is to minimize (resp. maximize) the expected number of undetected items. This model is motivated by strategic inspection problems, where a security agency is tasked with coordinating multiple inspection resources to detect and seize illegal commodities hidden by a criminal organization. To solve this large-scale zero-sum game, we leverage its structure and show that its mixed-strategy Nash equilibria can be characterized by their unidimensional marginal distributions, which are Nash equilibria of a lower-dimensional continuous zero-sum game. This leads to a two-step approach for efficiently solving our hide-and-seek game: First, we analytically solve the continuous game and compute the equilibrium marginal distributions. Second, we derive a combinatorial algorithm to coordinate the players' resources and compute equilibrium mixed strategies that satisfy the marginal distributions. We show that this solution approach computes a Nash equilibrium of the hide-and-seek game in quadratic time with linear support. Our analysis reveals a complex interplay between the game parameters and allows us to evaluate their impact on the players' behaviors in equilibrium and the criticality of each location.
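The payoff for a pure-strategy profile can be evaluated directly from the quantities named above (a sketch under the assumption that an inspection at location $i$ detects the items hidden there with probability `detect_prob[i]`; the function name is ours, and computing the equilibrium itself is the paper's contribution):

```python
def expected_undetected(hidden, detect_prob, inspected):
    """Expected number of undetected items when the hider places
    hidden[i] items at location i and the seeker inspects the set
    `inspected` of locations; items at uninspected locations are
    never detected."""
    total = 0.0
    for i, x in enumerate(hidden):
        p = detect_prob[i] if i in inspected else 0.0
        total += x * (1.0 - p)
    return total
```

The seeker minimizes and the hider maximizes this quantity, each randomizing over pure strategies subject to the capacity and resource constraints.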

Visualization plays a vital role in making sense of complex network data. Recent studies have shown the potential of using extended reality (XR) for the immersive exploration of networks. The additional depth cues offered by XR help users perform better in certain tasks when compared to using traditional desktop setups. However, prior works on immersive network visualization rely on mostly static graph layouts to present the data to the user. This poses a problem since there is no optimal layout for all possible tasks. The choice of layout heavily depends on the type of network and the task at hand. We introduce a multi-layout approach that allows users to effectively explore hierarchical network data in immersive space. The resulting system leverages different layout techniques and interactions to efficiently use the available space in VR and provide an optimal view of the data depending on the task and the level of detail required to solve it. To evaluate our approach, we have conducted a user study comparing it against the state of the art for immersive network visualization. Participants performed tasks at varying spatial scopes. The results show that our approach outperforms the baseline in spatially focused scenarios as well as when the whole network needs to be considered.

We study the set of optimal solutions of the dual linear programming formulation of the linear assignment problem (LAP) to propose a method for computing a solution from the relative interior of this set. Assuming that an arbitrary dual-optimal solution and an optimal assignment are available (for which many efficient algorithms already exist), our method computes a relative-interior solution in linear time. Since LAP occurs as a subproblem in the linear programming relaxation of quadratic assignment problem (QAP), we employ our method as a new component in the family of dual-ascent algorithms that provide bounds on the optimal value of QAP. To make our results applicable to incomplete QAP, which is of interest in practical use-cases, we also provide a linear-time reduction from incomplete LAP to complete LAP along with a mapping that preserves optimality and membership in the relative interior. Our experiments on publicly available benchmarks indicate that our approach with relative-interior solution is frequently capable of providing superior bounds and otherwise is at least comparable.
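The dual optimality conditions used here can be illustrated with a small verification sketch: by LP duality, a dual pair $(u, v)$ for the LAP dual $\max \sum_i u_i + \sum_j v_j$ s.t. $u_i + v_j \le c_{ij}$ is optimal iff it is feasible and the constraints are tight on the pairs of an optimal assignment (a hypothetical helper for checking this, not the relative-interior computation itself):

```python
def is_dual_optimal(cost, assignment, u, v, tol=1e-9):
    """Check dual feasibility (u_i + v_j <= c_ij everywhere) and
    complementary slackness (equality on assigned pairs) for the
    linear assignment problem; assignment[i] is the column matched
    to row i."""
    n = len(assignment)
    for i in range(n):
        for j in range(n):
            if u[i] + v[j] > cost[i][j] + tol:
                return False          # dual infeasible
    return all(abs(u[i] + v[assignment[i]] - cost[i][assignment[i]]) <= tol
               for i in range(n))     # slack must vanish on matched pairs
```

A relative-interior solution additionally keeps every *non*-binding constraint strictly slack wherever the optimal face allows it.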

This paper presents the equilibrium analysis of a game whose players are heterogeneous electric vehicles (EVs) and a power distribution system operator (DSO), with charging station operators (CSOs) and a transportation network operator (TNO) acting as coordinators. Each EV simultaneously picks a charging station as its destination and a route to reach it. However, traffic congestion on the roads and electrical load congestion at the charging stations create interdependencies among the EVs' optimal decisions. The CSOs and the TNO apply tolls to control this congestion. Meanwhile, pricing at the charging stations depends on real-time distributional locational marginal prices, determined by the DSO by solving the optimal power flow over the power distribution network. The model also accounts for the local and coupling/infrastructure constraints of the EVs, the transportation network, and the distribution network. We formulate this problem as a generalized aggregative game and propose a decentralized learning method to compute an equilibrium point of the game, known as a variational generalized Wardrop equilibrium. We prove the existence of such an equilibrium point and the convergence of the proposed algorithm to it. We undertake numerical studies on the Savannah city model and the IEEE 33-bus distribution network and investigate the impact of various characteristics on demand and prices.

We propose a family of mixed finite elements that are robust for the nearly incompressible strain gradient model, a fourth-order singularly perturbed elliptic system. The elements are analogous to the Taylor-Hood element [C. Taylor and P. Hood, Comput. & Fluids, 1 (1973), 73-100] for Stokes flow. Using a uniform discrete B-B (inf-sup) inequality for the mixed finite element pairs, we establish an optimal rate of convergence that is robust in the incompressible limit. By a new regularity result that is uniform in both the material parameter and the incompressibility, we prove that the method converges at order $1/2$ to solutions with strong boundary layer effects. Moreover, we estimate the rate of convergence of the numerical solution to the unperturbed second-order elliptic system. Numerical results for both smooth solutions and solutions with sharp layers confirm the theoretical predictions.

In this paper, we apply the median-of-means principle to derive robust versions of local averaging rules in non-parametric regression. For various estimates, including nearest neighbors and kernel procedures, we obtain non-asymptotic exponential inequalities, with only a second moment assumption on the noise. We then show that these bounds cannot be significantly improved by establishing a corresponding lower bound on tail probabilities.
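The median-of-means principle itself is simple to state in code (a generic sketch of the estimator for a plain mean, not the robust local-averaging rules analyzed in the paper):

```python
import numpy as np

def median_of_means(values, n_blocks):
    """Median-of-means estimator: split the sample into n_blocks blocks,
    average within each block, and return the median of the block means.
    A few corrupted or heavy-tailed observations can ruin at most a few
    block means, which the median then discards."""
    values = np.asarray(values, dtype=float)
    blocks = np.array_split(values, n_blocks)
    return float(np.median([b.mean() for b in blocks]))
```

In the regression setting, the same idea is applied to the local averages computed by nearest-neighbor or kernel rules on each block.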

The estimation of the potential impact fraction (including the population attributable fraction) from continuous exposure data frequently relies on strong distributional assumptions. However, these assumptions are often violated if the underlying exposure distribution is unknown or if the same distribution is assumed across time or space. Nonparametric methods to estimate the potential impact fraction are available for cohort data, but no alternatives exist for cross-sectional data. In this article, we discuss the impact of distributional assumptions on the estimation of the potential impact fraction, showing that across an infinite set of scenarios, distributional violations lead to biased estimates. We propose nonparametric methods to estimate the potential impact fraction from aggregated data (mean and standard deviation) or individual data (e.g. observations from a cross-sectional population survey), and develop simulation scenarios to compare their performance against standard parametric procedures. We illustrate our methodology with an application to the effect of sugar-sweetened beverage consumption on the incidence of type 2 diabetes. We also present pifpaf, an R package implementing these methods.
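The plug-in idea behind a nonparametric estimate from individual data can be sketched as follows (illustrative only; the function names and the simple empirical-mean form are our assumptions, and the actual pifpaf estimators also handle aggregated data):

```python
import numpy as np

def potential_impact_fraction(exposure, relative_risk, counterfactual):
    """Empirical plug-in potential impact fraction:
        PIF = 1 - mean(RR(g(X))) / mean(RR(X)),
    where X are observed exposures, g is the counterfactual
    transformation (e.g. reduced consumption), and RR the
    relative-risk function. No distribution is fitted to X: the
    population expectations are replaced by sample means."""
    x = np.asarray(exposure, dtype=float)
    return 1.0 - np.mean(relative_risk(counterfactual(x))) / np.mean(relative_risk(x))
```

Setting the counterfactual to the theoretical-minimum-risk exposure recovers the population attributable fraction as a special case.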

Seeking equivalent entities across multi-source Knowledge Graphs (KGs), also known as \emph{entity alignment} (EA), is the pivotal step in KG integration. However, most existing EA methods are inefficient and scale poorly. A recent survey points out that some of them require several days to process a dataset containing 200,000 nodes (DWY100K). We attribute this to two main causes: over-complex graph encoders and inefficient negative sampling strategies. In this paper, we propose a novel KG encoder, the Dual Attention Matching Network (Dual-AMN), which models both intra-graph and cross-graph information while greatly reducing computational complexity. Furthermore, we propose the Normalized Hard Sample Mining Loss to smoothly select hard negative samples with reduced loss shift. Experimental results on widely used public datasets show that our method achieves both high accuracy and high efficiency. On DWY100K, the entire run of our method finishes in 1,100 seconds, at least 10x faster than previous work. Our method also outperforms previous work across all datasets, improving Hits@1 and MRR by 6% to 13%.
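As an illustration of smooth hard-negative weighting in general (a hypothetical sketch in this spirit, not the paper's Normalized Hard Sample Mining Loss), each negative's margin violation can be weighted by a softmax over negative similarities, so harder negatives contribute more without a discrete top-k selection:

```python
import numpy as np

def weighted_negative_loss(pos_sim, neg_sims, margin=1.0, temperature=10.0):
    """Margin loss over negatives, with each negative weighted by a
    softmax over the negatives' similarity scores: high-similarity
    (hard) negatives dominate the loss smoothly."""
    neg_sims = np.asarray(neg_sims, dtype=float)
    w = np.exp(temperature * neg_sims)
    w = w / w.sum()                                  # softmax weights
    violations = np.maximum(margin - pos_sim + neg_sims, 0.0)
    return float(np.sum(w * violations))
```

As the temperature grows, the loss approaches the single hardest negative; as it shrinks, it approaches the uniform average over all negatives.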

Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory intensive, hindering their deployment in devices with limited memory or in applications with strict latency requirements. A natural approach is therefore to compress and accelerate deep networks without significantly degrading model performance. Tremendous progress has been made in this area over the past few years. In this paper, we survey recent techniques developed for compacting and accelerating CNN models. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation. We describe methods of parameter pruning and sharing first, then introduce the other techniques. For each scheme, we provide insightful analysis of the performance, related applications, advantages, and drawbacks. We then cover a few very recent successful methods, for example, dynamic capacity networks and stochastic depth networks. After that, we survey the evaluation metrics and the main datasets used for evaluating model performance, along with recent benchmarking efforts. Finally, we conclude the paper and discuss remaining challenges and possible directions in this area.
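As a concrete instance of the first scheme, magnitude-based parameter pruning zeroes out the smallest-magnitude weights (a minimal single-array sketch; real pipelines prune layer by layer and fine-tune afterwards to recover accuracy):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of entries with the smallest
    absolute values, leaving the rest untouched."""
    w = np.asarray(weights, dtype=float).copy()
    k = int(sparsity * w.size)
    if k > 0:
        flat = w.reshape(-1)
        idx = np.argpartition(np.abs(flat), k - 1)[:k]  # k smallest magnitudes
        flat[idx] = 0.0
    return w
```

The resulting sparse tensors reduce storage immediately, although realizing speedups additionally requires sparse-aware kernels or structured pruning.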
