
This paper develops a new integrated ball (pseudo)metric that serves as an intermediary between a chosen starting (pseudo)metric d and the L_p distance in general function spaces. Selecting d as the Hausdorff or Fr\'echet distance, we introduce integrated, shape-sensitive versions of these supremum-based metrics. The new metrics allow for finer analyses in functional settings that are not attainable by applying the non-integrated versions directly. Moreover, convergent discrete approximations make computations feasible in practice.
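
As a rough illustration (not the construction proposed in this paper, whose definition is not reproduced in the abstract), the sketch below computes the two ingredients the abstract refers to on discretized data: a supremum-based starting distance d (here the symmetric Hausdorff distance between sampled graphs, via SciPy) and an L_p distance approximated by a Riemann sum. The sample functions, grid, and p are made-up choices.

```python
# Minimal sketch, assuming two functions sampled on a common grid and viewed
# as planar curves (t, f(t)).  Illustrates the two baseline distances only.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

t = np.linspace(0.0, 1.0, 200)
f = np.sin(2 * np.pi * t)
g = np.sin(2 * np.pi * t) + 0.1 * np.cos(6 * np.pi * t)
F = np.column_stack([t, f])
G = np.column_stack([t, g])

# Supremum-based starting distance d: the (symmetric) Hausdorff distance.
hausdorff = max(directed_hausdorff(F, G)[0], directed_hausdorff(G, F)[0])

# L_p distance between the functions, approximated by a Riemann sum.
p = 2
lp = np.trapz(np.abs(f - g) ** p, t) ** (1.0 / p)

print(f"Hausdorff: {hausdorff:.4f}, L_{p}: {lp:.4f}")
```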

Related Content

Integration: Integration, the VLSI Journal. Publisher: Elsevier.

Matrices resulting from the discretization of a kernel function, e.g., in the context of integral equations or sampling probability distributions, can frequently be approximated by interpolation. In order to improve the efficiency, a multi-level approach can be employed that involves interpolating the kernel function and its approximations multiple times. This article presents a new approach to analyze the error incurred by these iterated interpolation procedures that is considerably more elegant than its predecessors and allows us to treat not only the kernel function itself, but also its derivatives.
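
The sketch below shows a single-level version of the basic idea (the article's scheme is multi-level and analyzed rigorously; this is only an illustration under made-up choices of kernel, intervals, and interpolation order): interpolating an asymptotically smooth kernel at Chebyshev nodes yields a degenerate, low-rank approximation of the discretized kernel matrix.

```python
# Minimal single-level interpolation sketch: K(x, y) ~ sum_ij k(xi_i, eta_j) L_i(x) L_j(y).
import numpy as np

def cheb_nodes(a, b, m):
    """Chebyshev points mapped to [a, b]."""
    k = np.arange(m)
    return 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * k + 1) * np.pi / (2 * m))

def lagrange_matrix(x, nodes):
    """L[i, j] = j-th Lagrange basis polynomial (w.r.t. nodes) evaluated at x[i]."""
    L = np.ones((len(x), len(nodes)))
    for j, xj in enumerate(nodes):
        for l, xl in enumerate(nodes):
            if l != j:
                L[:, j] *= (x - xl) / (xj - xl)
    return L

# Model kernel on well-separated source and target intervals.
kernel = lambda x, y: 1.0 / np.abs(x[:, None] - y[None, :])

x = np.linspace(0.0, 1.0, 300)        # targets
y = np.linspace(3.0, 4.0, 300)        # sources, well separated from the targets
m = 8                                 # interpolation order

xi, eta = cheb_nodes(0.0, 1.0, m), cheb_nodes(3.0, 4.0, m)
A = lagrange_matrix(x, xi)            # 300 x m
B = lagrange_matrix(y, eta)           # 300 x m
K_tilde = A @ kernel(xi, eta) @ B.T   # rank-m approximation of the kernel matrix

err = np.linalg.norm(K_tilde - kernel(x, y)) / np.linalg.norm(kernel(x, y))
print(f"relative error of the rank-{m} interpolant: {err:.2e}")
```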

In this article, we define a new non-archimedean metric structure, called the cophenetic metric, on persistent homology classes of all degrees. We then show that zeroth persistent homology equipped with the cophenetic metric and hierarchical clustering algorithms with a number of different metrics deliver statistically verifiable and commensurate topological information, based on experimental results obtained on different datasets. We also observe that the clusters produced by the cophenetic distance perform well in terms of different evaluation measures such as the silhouette score and the Rand index. Moreover, since the cophenetic metric is defined for all homology degrees, one can now display the interrelations of persistent homology classes in all degrees via rooted trees.
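
For orientation, the sketch below runs the standard pipeline the abstract evaluates against: hierarchical clustering, the classical cophenetic distance of the resulting dendrogram, and the two evaluation measures mentioned (silhouette score and an adjusted Rand index), using SciPy and scikit-learn on a synthetic dataset. Note that this is the textbook cophenetic distance of a dendrogram, not the paper's cophenetic metric on persistent homology classes.

```python
# Baseline pipeline sketch on synthetic data (assumed, not from the paper).
from scipy.cluster.hierarchy import linkage, cophenet, fcluster
from scipy.spatial.distance import pdist
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score, adjusted_rand_score

X, y_true = make_blobs(n_samples=300, centers=3, random_state=0)

Z = linkage(X, method="average")        # hierarchical clustering (dendrogram)
c, coph_d = cophenet(Z, pdist(X))       # cophenetic correlation and distances
labels = fcluster(Z, t=3, criterion="maxclust")

print(f"cophenetic correlation: {c:.3f}")
print(f"silhouette score:       {silhouette_score(X, labels):.3f}")
print(f"adjusted Rand index:    {adjusted_rand_score(y_true, labels):.3f}")
```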

Imperative session types provide an imperative interface to session-typed communication. In such an interface, channel references are first-class objects with operations that change the typestate of the channel. Compared to functional session type APIs, the program structure is simpler at the surface, but typestate is required to model the current state of communication throughout. Following early work that explored the imperative approach, a significant body of work on session types has neglected it in favor of a functional approach that uses linear types to manage channel references soundly. We demonstrate that the functional approach subsumes the early work on imperative session types by exhibiting a typing- and semantics-preserving translation into a system of linear functional session types. We further show that the untyped backwards translation from the functional to the imperative calculus is semantics preserving. We restrict the type system of the functional calculus such that the backwards translation becomes type preserving. Thus, we precisely capture the difference in expressiveness of the two calculi and conclude that the lack of expressiveness in the imperative calculus is largely due to restrictions imposed by its type system.
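
To make the contrast between the two interface styles concrete, here is a small sketch. Python has neither typestate nor linear types, so both disciplines are only mimicked with run-time checks; the class and protocol names are hypothetical and do not correspond to either calculus in the paper.

```python
# Illustrative only: typestate vs. linear-handle channel interfaces,
# approximated with run-time assertions.  A protocol is a list of steps
# such as ["send", "recv", "close"].

class ImperativeChannel:
    """One first-class channel object; each operation mutates its typestate."""
    def __init__(self, protocol):
        self._protocol = list(protocol)

    def send(self, value):
        assert self._protocol and self._protocol[0] == "send", "protocol violation"
        self._protocol.pop(0)          # typestate advances in place
        # ... transmit value ...

class FunctionalChannel:
    """Linear handles: every operation consumes the handle and returns a new one."""
    def __init__(self, protocol):
        self._protocol = tuple(protocol)
        self._consumed = False

    def send(self, value):
        assert not self._consumed, "handle already used (linearity violation)"
        assert self._protocol and self._protocol[0] == "send", "protocol violation"
        self._consumed = True
        # ... transmit value ...
        return FunctionalChannel(self._protocol[1:])   # fresh handle in the next state
```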

This paper studies the transmit beamforming in a downlink integrated sensing and communication (ISAC) system, where a base station (BS) equipped with a uniform linear array (ULA) sends combined information-bearing and dedicated radar signals to simultaneously perform downlink multiuser communication and radar target sensing. Under this setup, we maximize the radar sensing performance (in terms of minimizing the beampattern matching errors or maximizing the minimum beampattern gains), subject to the communication users' minimum signal-to-interference-plus-noise ratio (SINR) requirements and the BS's transmit power constraints. In particular, we consider two types of communication receivers, namely Type-I and Type-II receivers, which do not have and do have the capability of cancelling the interference from the \emph{a priori} known dedicated radar signals, respectively. Under both Type-I and Type-II receivers, the beampattern matching and minimum beampattern gain maximization problems are solved to global optimality by applying the semidefinite relaxation (SDR) technique, together with a rigorous proof that the SDR is tight for both receiver types under the two design criteria. It is shown that at the optimality, dedicated radar signals are not required with Type-I receivers under some specific conditions, while dedicated radar signals are always needed to enhance the performance with Type-II receivers. Numerical results show that the minimum beampattern gain maximization leads to significantly higher beampattern gains at the worst-case sensing angles with a much lower computational complexity than the beampattern matching design. It is also shown that by exploiting the capability of canceling the interference caused by the radar signals, the case with Type-II receivers results in better sensing performance than that with Type-I receivers and other conventional designs.
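
The sketch below sets up a heavily simplified toy instance of the SDR approach for the minimum-beampattern-gain design with Type-I receivers (radar signal treated as interference), using CVXPY. All dimensions, channels, sensing angles, and targets are made-up assumptions, and the formulation is a sketch of the general SDR idea rather than the paper's exact problem or its tightness argument.

```python
# Toy SDR sketch: maximize the worst-case beampattern gain subject to per-user
# SINR and a total power budget, with rank-one covariances relaxed to PSD matrices.
import numpy as np
import cvxpy as cp

N, K = 8, 2                                 # BS antennas, communication users (assumed)
angles = np.deg2rad([-40.0, 0.0, 40.0])     # sensing angles of interest (assumed)
gamma = 10 ** (10 / 10)                     # 10 dB SINR target per user (assumed)
P = 1.0                                     # total transmit power budget (assumed)
sigma2 = 1e-3                               # receiver noise power (assumed)

rng = np.random.default_rng(0)
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

def steering(theta):
    """ULA steering vector with half-wavelength spacing."""
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta))

W = [cp.Variable((N, N), hermitian=True) for _ in range(K)]   # per-user covariances
R = cp.Variable((N, N), hermitian=True)                       # dedicated radar covariance
t = cp.Variable()                                             # worst-case beampattern gain

R_tot = R + sum(W)
constraints = [w >> 0 for w in W] + [R >> 0, cp.real(cp.trace(R_tot)) <= P]

for k in range(K):
    hk = H[k][:, None]
    Qk = hk @ hk.conj().T
    desired = cp.real(cp.trace(Qk @ W[k]))
    interference = cp.real(cp.trace(Qk @ (R_tot - W[k])))   # Type-I: radar not cancelled
    constraints.append(desired >= gamma * (interference + sigma2))

for theta in angles:
    a = steering(theta)[:, None]
    constraints.append(cp.real(cp.trace((a @ a.conj().T) @ R_tot)) >= t)

prob = cp.Problem(cp.Maximize(t), constraints)
prob.solve()
print("worst-case beampattern gain:", t.value)
```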

Several non-linear functions and machine learning methods have been developed for flexible specification of the systematic utility in discrete choice models. However, they lack interpretability, do not ensure monotonicity conditions, and restrict substitution patterns. We address the first two challenges by modelling the systematic utility using the Choquet Integral (CI) function and the last one by embedding CI into the multinomial probit (MNP) choice probability kernel. We also extend the MNP-CI model to account for attribute cut-offs that enable a modeller to approximately mimic semi-compensatory behaviour using traditional choice experiment data. The MNP-CI model is estimated using a constrained maximum likelihood approach, and its statistical properties are validated through a comprehensive Monte Carlo study. The CI-based choice model is empirically advantageous as it captures interaction effects while maintaining monotonicity. It also provides information on the complementarity between pairs of attributes, coupled with their importance ranking, as a by-product of the estimation. These insights could potentially assist policymakers in designing policies that improve the preference level for an alternative. These advantages of the MNP-CI model with attribute cut-offs are illustrated in an empirical application to understand New Yorkers' preferences towards mobility-on-demand services.
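
For readers unfamiliar with the CI function, the sketch below implements the standard discrete Choquet integral of an attribute vector with respect to a capacity (fuzzy measure). The attribute names and capacity values are illustrative placeholders, not estimated MNP-CI parameters.

```python
# Discrete Choquet integral: CI(x) = sum_i (x_(i) - x_(i-1)) * mu(A_(i)),
# where x_(1) <= ... <= x_(n) and A_(i) is the set of attributes whose value is >= x_(i).

def choquet_integral(x, mu):
    """x: dict attribute -> value; mu: dict frozenset -> capacity in [0, 1],
    with mu(empty set) = 0, mu(all attributes) = 1, and mu monotone in set inclusion."""
    items = sorted(x.items(), key=lambda kv: kv[1])   # ascending by attribute value
    attrs = [a for a, _ in items]
    total, prev = 0.0, 0.0
    for i, (_, xi) in enumerate(items):
        upper = frozenset(attrs[i:])                  # attributes with value >= xi
        total += (xi - prev) * mu[upper]
        prev = xi
    return total

# Hypothetical example: utility over (rescaled) time, cost, and comfort.
x = {"time": 0.2, "cost": 0.5, "comfort": 0.9}
mu = {
    frozenset(): 0.0,
    frozenset({"time"}): 0.3, frozenset({"cost"}): 0.4, frozenset({"comfort"}): 0.2,
    frozenset({"time", "cost"}): 0.8,                 # super-additive pair: interaction effect
    frozenset({"time", "comfort"}): 0.5, frozenset({"cost", "comfort"}): 0.6,
    frozenset({"time", "cost", "comfort"}): 1.0,
}
print(f"Choquet integral (systematic utility): {choquet_integral(x, mu):.3f}")
```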

Conferences are deeply connected to their research fields, in this case bibliometrics. As such, they are a venue to present and discuss current and innovative research, and they play an important role for the scholarly community. In this article, we provide an overview of the history of conferences in bibliometrics. We conduct an analysis to list the most prominent conferences announced in the newsletter of ISSI, the International Society for Scientometrics and Informetrics. Furthermore, we describe how conferences are connected to learned societies and journals. Finally, we provide an outlook on how conferences might change in the future.

Space-air-ground integrated networks (SAGINs) will play a key role in 6G communication systems. They are considered a promising technology to enhance the network capacity in highly dense agglomerations and to provide connectivity in rural areas. The multi-layer and heterogeneous nature of SAGINs necessitates an innovative design of their multi-tier associations. We model the SAGIN association problem using multi-sided matching theory. Our aim is to provide a reliable, asynchronous, and fully distributed approach that associates nodes across the layers so that the total end-to-end rate of the assigned agents is maximized. To this end, the problem is modeled as a multi-sided many-to-one matching game, and a randomized matching algorithm with low information exchange is proposed. The algorithm is shown to reach an efficient and stable association between nodes in adjacent layers. Our simulation results show that the proposed approach achieves significant gains compared to greedy and distance-based algorithms.
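
As a single-tier illustration of matching-based association (not the paper's randomized multi-sided algorithm), the sketch below runs a deferred-acceptance-style many-to-one matching between two adjacent layers, say ground users proposing to aerial platforms with quotas, where preferences on both sides are given by made-up achievable rates.

```python
# Toy many-to-one matching between two adjacent layers with rate-based preferences.
import numpy as np

rng = np.random.default_rng(1)
n_users, n_aps, quota = 8, 3, 3                         # assumed sizes and quota
rate = rng.uniform(1.0, 10.0, size=(n_users, n_aps))    # achievable rate user -> platform

prefs = {u: list(np.argsort(-rate[u])) for u in range(n_users)}  # best platform first
matched = {a: [] for a in range(n_aps)}
free = list(range(n_users))

while free:
    u = free.pop(0)
    if not prefs[u]:
        continue                          # user has exhausted its preference list
    a = prefs[u].pop(0)                   # propose to the best remaining platform
    matched[a].append(u)
    if len(matched[a]) > quota:           # platform keeps its quota of highest-rate users
        worst = min(matched[a], key=lambda v: rate[v, a])
        matched[a].remove(worst)
        free.append(worst)                # rejected user proposes again later

total = sum(rate[u, a] for a, users in matched.items() for u in users)
print(matched, f"total rate = {total:.2f}")
```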

The problem of Approximate Nearest Neighbor (ANN) search is fundamental in computer science and has benefited from significant progress in the past couple of decades. However, most work has been devoted to pointsets, whereas complex shapes have not been sufficiently treated. Here, we focus on distance functions between discretized curves in Euclidean space: they appear in a wide range of applications, from road segments to time-series in general dimension. For $\ell_p$-products of Euclidean metrics, for any $p$, we design simple and efficient data structures for ANN, based on randomized projections, which are of independent interest. They serve to solve proximity problems under a notion of distance between discretized curves, which generalizes both the discrete Fr\'echet and Dynamic Time Warping distances. These are the most popular and practical approaches to comparing such curves. We offer the first data structures and query algorithms for ANN with arbitrarily good approximation factor, at the expense of increased space usage and preprocessing time over existing methods. Our query time complexity is comparable to, or significantly better than, that of existing methods, and our algorithms are especially efficient when the length of the curves is bounded.
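
For reference, the sketch below computes the two curve distances that the paper's notion generalizes, the discrete Fr\'echet distance and Dynamic Time Warping, via their standard dynamic programs on made-up sampled curves. The ANN data structure itself is not sketched here.

```python
# Standard dynamic programs for the discrete Frechet and DTW distances.
import numpy as np

def _pairwise(P, Q):
    return np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)

def discrete_frechet(P, Q):
    D = _pairwise(P, Q)
    n, m = D.shape
    C = np.full((n, m), np.inf)
    C[0, 0] = D[0, 0]
    for i in range(n):
        for j in range(m):
            if i == j == 0:
                continue
            best_prev = min(C[i - 1, j] if i > 0 else np.inf,
                            C[i, j - 1] if j > 0 else np.inf,
                            C[i - 1, j - 1] if i > 0 and j > 0 else np.inf)
            C[i, j] = max(best_prev, D[i, j])      # bottleneck (max) recursion
    return C[-1, -1]

def dtw(P, Q):
    D = _pairwise(P, Q)
    n, m = D.shape
    C = np.full((n, m), np.inf)
    C[0, 0] = D[0, 0]
    for i in range(n):
        for j in range(m):
            if i == j == 0:
                continue
            best_prev = min(C[i - 1, j] if i > 0 else np.inf,
                            C[i, j - 1] if j > 0 else np.inf,
                            C[i - 1, j - 1] if i > 0 and j > 0 else np.inf)
            C[i, j] = best_prev + D[i, j]          # additive (sum) recursion
    return C[-1, -1]

t = np.linspace(0, 1, 50)
P = np.column_stack([t, np.sin(2 * np.pi * t)])
Q = np.column_stack([t, np.sin(2 * np.pi * t + 0.3)])
print(f"discrete Frechet: {discrete_frechet(P, Q):.3f}, DTW: {dtw(P, Q):.3f}")
```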

Intersection over Union (IoU) is the most popular evaluation metric used in object detection benchmarks. However, there is a gap between optimizing the commonly used distance losses for regressing the parameters of a bounding box and maximizing this metric value. The optimal objective for a metric is the metric itself. In the case of axis-aligned 2D bounding boxes, it can be shown that $IoU$ can be directly used as a regression loss. However, $IoU$ has a plateau, making it infeasible to optimize in the case of non-overlapping bounding boxes. In this paper, we address the weaknesses of $IoU$ by introducing a generalized version as both a new loss and a new metric. By incorporating this generalized $IoU$ ($GIoU$) as a loss into state-of-the-art object detection frameworks, we show a consistent improvement in their performance using both the standard, $IoU$-based, and the new, $GIoU$-based, performance measures on popular object detection benchmarks such as PASCAL VOC and MS COCO.
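
A minimal reference implementation of $GIoU$ for axis-aligned boxes follows, using the standard definition $GIoU = IoU - |C \setminus (A \cup B)| / |C|$ with $C$ the smallest enclosing box; the example boxes are made up to show that, unlike $IoU$, $GIoU$ still discriminates between non-overlapping boxes.

```python
# GIoU for axis-aligned boxes given as (x1, y1, x2, y2); loss would be 1 - GIoU.
def giou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)

    # Intersection and union.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = area_a + area_b - inter
    iou = inter / union

    # Smallest enclosing box C.
    area_c = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return iou - (area_c - union) / area_c      # lies in (-1, 1]

# Both pairs have IoU = 0, but GIoU still ranks the nearer pair higher.
print(giou((0, 0, 1, 1), (2, 0, 3, 1)))   # ~ -0.333
print(giou((0, 0, 1, 1), (5, 0, 6, 1)))   # ~ -0.667
```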

This paper considers the integrated problem of quay crane assignment, quay crane scheduling, yard location assignment, and vehicle dispatching operations at a container terminal. The main objective is to minimize vessel turnover times and maximize the terminal throughput, which are key economic drivers in terminal operations. Due to their computational complexity, these problems are not optimized jointly in existing work. This paper revisits this limitation and proposes Mixed Integer Programming (MIP) and Constraint Programming (CP) models for the integrated problem, under some realistic assumptions. Experimental results show that the MIP formulation can only solve small instances, while the CP model finds optimal solutions in reasonable time for realistic instances derived from actual container terminal operations.
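
To give a flavour of the CP modelling style (this is not the paper's model), the sketch below uses Google OR-Tools CP-SAT to assign a handful of (un)loading tasks to quay cranes, with each crane handling one task at a time and the makespan minimized as a rough proxy for vessel turnover time. Task durations and the number of cranes are made-up toy data.

```python
# Toy CP-SAT sketch: task-to-crane assignment with per-crane no-overlap and
# makespan minimization.
from ortools.sat.python import cp_model

durations = [4, 2, 3, 5, 2, 4]        # processing time of each task (assumed)
n_cranes = 2
horizon = sum(durations)

model = cp_model.CpModel()
starts, ends, assigned = [], [], []
crane_intervals = [[] for _ in range(n_cranes)]

for t, d in enumerate(durations):
    s = model.NewIntVar(0, horizon, f"start_{t}")
    e = model.NewIntVar(0, horizon, f"end_{t}")
    starts.append(s)
    ends.append(e)
    lits = []
    for c in range(n_cranes):
        lit = model.NewBoolVar(f"task{t}_on_crane{c}")
        crane_intervals[c].append(
            model.NewOptionalIntervalVar(s, d, e, lit, f"iv_{t}_{c}"))
        lits.append(lit)
    model.Add(sum(lits) == 1)          # every task goes to exactly one crane
    assigned.append(lits)

for c in range(n_cranes):
    model.AddNoOverlap(crane_intervals[c])   # a crane handles one task at a time

makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, ends)
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("makespan:", solver.Value(makespan))
    for t, lits in enumerate(assigned):
        crane = next(c for c, lit in enumerate(lits) if solver.Value(lit))
        print(f"task {t}: crane {crane}, start {solver.Value(starts[t])}")
```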
