
Working in a variant of the intersection type assignment system of Coppo, Dezani-Ciancaglini and Veneri [1981], we prove several facts about sets of terms having a given intersection type. One of our results is that every strongly normalizing term $M$ admits a *uniqueness typing*, which is a pair $(\Gamma, A)$ such that (1) $\Gamma \vdash M : A$, and (2) $\Gamma \vdash N : A \Longrightarrow M =_{\beta\eta} N$. We also discuss several presentations of intersection type algebras, and the corresponding choices of type assignment rules. In the second part of the paper, we prove that the set of closed terms having a given intersection type is separable and, if infinite, forms an adequate numeral system.
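
To make the definition concrete, here is a small illustration of our own (not a claim from the paper): for the identity term, one would expect the empty context together with the simple type $\alpha \to \alpha$, for $\alpha$ atomic, to form a uniqueness typing, since every closed term of that type should be $\beta\eta$-equal to the identity.

```latex
% Hedged illustration (ours): a plausible uniqueness typing for
% M = \lambda x.x would be the pair (\emptyset, \alpha \to \alpha):
\[
  \emptyset \vdash \lambda x.\,x : \alpha \to \alpha,
  \qquad
  \emptyset \vdash N : \alpha \to \alpha
  \;\Longrightarrow\;
  N =_{\beta\eta} \lambda x.\,x .
\]
```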

Related content

In classic auction theory, reserve prices are known to be effective for improving the auctioneer's revenue against quasi-linear utility-maximizing bidders. The introduction of reserve prices, however, usually does not help improve the total welfare of the auctioneer and the bidders. In this paper, we focus on value-maximizing bidders with return-on-spend constraints -- a paradigm that has drawn considerable attention recently as more advertisers adopt auto-bidding algorithms in advertising platforms -- and show that the introduction of reserve prices has a novel impact on the market. Namely, by choosing reserve prices appropriately the auctioneer can improve not only the total revenue but also the total welfare. Our results also demonstrate that reserve prices are robust to bidder types, i.e., they work well for different bidder types, such as value maximizers and utility maximizers, without using bidder-type information. We generalize these results to a variety of auction mechanisms such as VCG, GSP, and first-price auctions. Moreover, we show how to combine these results with additive boosts to further improve the welfare of the auction outcomes. Finally, we complement our theoretical observations with an empirical study confirming the effectiveness of these ideas using data from online advertising auctions.
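
The following toy simulation (ours, not the paper's model or code) illustrates the basic mechanics at play: a second-price auction with a reserve price, a truthful utility maximizer, and a value maximizer modeled, purely for illustration, as bidding a hypothetical fixed multiplier times its value.

```python
# Toy sketch (ours): mechanics of a second-price auction with reserve r.
# A quasi-linear utility maximizer bids its value truthfully; a value
# maximizer with a return-on-spend constraint is approximated here by a
# hypothetical fixed bid multiplier c >= 1.
import random

def second_price_with_reserve(bids, values, r):
    """Return (welfare, revenue) of one single-item auction round."""
    eligible = [i for i, b in enumerate(bids) if b >= r]
    if not eligible:
        return 0.0, 0.0
    winner = max(eligible, key=lambda i: bids[i])
    others = [bids[i] for i in eligible if i != winner]
    payment = max(max(others, default=0.0), r)  # second price, floored at r
    return values[winner], payment

random.seed(0)
rounds = [(random.random(), random.random()) for _ in range(100_000)]
c = 1.5  # hypothetical multiplier for the value-maximizing bidder 0
for r in (0.0, 0.2, 0.4):
    W = R = 0.0
    for v0, v1 in rounds:
        w, p = second_price_with_reserve((c * v0, v1), (v0, v1), r)
        W += w
        R += p
    n = len(rounds)
    print(f"reserve={r:.1f}  avg welfare={W/n:.3f}  avg revenue={R/n:.3f}")
```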

Test smells are known as bad development practices that reflect poor design and implementation choices in software tests. Over the last decade, test smells have been heavily studied to measure their prevalence and their impact on test maintainability. However, these studies focused mainly on the unit level, and to this day, work on system tests that interact with the System Under Test through a Graphical User Interface remains limited. To fill the gap, we conduct an exploratory analysis of test smells occurring in System User Interactive Tests (SUIT). First, based on a multi-vocal literature review, we propose a catalog of 35 SUIT-specific smells. Then, we conduct an empirical analysis to assess the prevalence and refactoring of these smells in 48 industrial test suites and 12 open-source projects. We show that the same types of smells tend to appear in industrial and open-source projects, but the symptoms are not addressed in the same way. Smells such as Obscure Test, Sneaky Checking, and Over Checking show symptoms in more than 70% of the tests. Yet refactoring actions are much less frequent, with fewer than 50% of the affected tests ever undergoing refactoring. Interestingly, while refactoring actions are rare, some smells, such as Narcissistic, disappear through the removal of old symptomatic tests and the introduction of new tests not presenting such symptoms.
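
To give a flavor of what such a smell can look like, here is a hypothetical Selenium-style example of our own construction illustrating the Over Checking smell; the page, URL, and element IDs are invented for illustration and do not come from the paper.

```python
# Hypothetical illustration (ours) of the "Over Checking" SUIT smell:
# a GUI test that piles up assertions unrelated to the behavior under
# test, so any cosmetic page change breaks it.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_over_checking(driver: webdriver.Chrome):
    driver.get("https://example.test/login")  # hypothetical URL
    driver.find_element(By.ID, "user").send_keys("alice")
    driver.find_element(By.ID, "pass").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # Only the first assertion checks the tested behavior; the rest
    # over-check incidental page details (the smell).
    assert "Dashboard" in driver.title
    assert driver.find_element(By.ID, "logo").is_displayed()
    assert driver.find_element(By.ID, "footer").text.startswith("©")
    assert len(driver.find_elements(By.TAG_NAME, "a")) == 42
```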

A flag of codes $C_0 \subsetneq C_1 \subsetneq \cdots \subsetneq C_s \subseteq {\mathbb F}_q^n$ is said to satisfy the {\it isometry-dual property} if there exists ${\bf x}\in (\mathbb{F}_q^*)^n$ such that the code $C_i$ is {\bf x}-isometric to the dual code $C_{s-i}^\perp$ for all $i=0,\ldots, s$. For $P$ and $Q$ rational places in a function field ${\mathcal F}$, we investigate the existence of isometry-dual flags of codes in the families of two-point algebraic geometry codes $$C_\mathcal L(D, a_0P+bQ)\subsetneq C_\mathcal L(D, a_1P+bQ)\subsetneq \dots \subsetneq C_\mathcal L(D, a_sP+bQ),$$ where the divisor $D$ is the sum of pairwise different rational places of ${\mathcal F}$ and $P, Q$ are not in $\mbox{supp}(D)$. We characterize those sequences in terms of $b$ for general function fields. We then apply the result to the broad class of Kummer extensions ${\mathcal F}$ defined by affine equations of the form $y^m=f(x)$, for $f(x)$ a separable polynomial of degree $r$, where $\mbox{gcd}(r, m)=1$. For $P$ the rational place at infinity and $Q$ the rational place associated to one of the roots of $f(x)$, it is shown that the flag of two-point algebraic geometry codes has the isometry-dual property if and only if $m$ divides $2b+1$. At the end, we illustrate our results by applying them to two-point codes over several well-known function fields.
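
As a purely arithmetical illustration of this criterion (a worked instance of ours; note that for even $m > 1$ the condition can never hold, since $2b+1$ is odd):

```latex
% Worked instance (ours) of the criterion m | 2b + 1:
\[
  m \mid 2b+1
  \quad\Longleftrightarrow\quad
  b \equiv \tfrac{m-1}{2} \pmod{m} \qquad (m \text{ odd}),
\]
% e.g. for m = 5 the admissible values are b = 2, 7, 12, ...,
% since 2*2 + 1 = 5.
```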

We propose an approach to the estimation of infinite sets of random vectors. The problem addressed is as follows: given two infinite sets of random vectors, find a single estimator that estimates the vectors of one set from those of the other, with a controlled associated error. A new theory for the existence and implementation of such an estimator is developed. In particular, we show that the proposed estimator is asymptotically optimal. Moreover, the estimator is determined in terms of pseudo-inverse matrices and, therefore, it always exists.
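
A finite-sample sketch of the pseudo-inverse idea (ours, not the paper's estimator): the least-squares linear map built from sample covariances always exists because the Moore-Penrose pseudo-inverse does, even when the observation covariance is singular.

```python
# Sketch (ours): a finite-sample, linear least-squares analogue of a
# pseudo-inverse-based estimator. F = Sxy @ pinv(Syy) always exists.
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 4, 6, 10_000
A = rng.normal(size=(m, n))
X = rng.normal(size=(n, N))                 # target random vectors
Y = A @ X + 0.1 * rng.normal(size=(m, N))   # noisy observations

Syy = (Y @ Y.T) / N             # sample covariance of the observations
Sxy = (X @ Y.T) / N             # sample cross-covariance
F = Sxy @ np.linalg.pinv(Syy)   # estimator; defined for any Syy

err = np.mean(np.sum((X - F @ Y) ** 2, axis=0))
print(f"mean squared estimation error: {err:.4f}")
```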

A finite element analysis of a Dirichlet boundary control problem governed by the linear parabolic equation is presented in this article. The Dirichlet control is considered in a closed and convex subset of the energy space $H^1(\Omega \times(0,T))$. We prove well-posedness and discuss some regularity results for the control problem, and we derive the optimality system for the optimal control problem. The first-order necessary optimality condition results in a simplified Signorini-type problem for the control variable. The space discretization of the state variable uses conforming finite elements, whereas the time discretization is based on discontinuous Galerkin methods. To discretize the control, we use conforming prismatic Lagrange finite elements. We derive an optimal order of convergence for the error in the control, state, and adjoint state. The theoretical results are corroborated by some numerical tests.
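
For readers outside optimal control, the generic textbook form of such a first-order condition is the variational inequality below; this is background of ours, not the paper's exact optimality system.

```latex
% Generic form only (textbook background, not the paper's exact system):
% for a closed, convex admissible set Q_ad and reduced cost functional j,
% a minimizer \bar{q} satisfies the variational inequality
\[
  \bar q \in Q_{ad}, \qquad
  j'(\bar q)\,(q - \bar q) \;\ge\; 0 \quad \text{for all } q \in Q_{ad},
\]
% which, for obstacle-like constraint sets, takes the complementarity
% form characteristic of Signorini-type problems.
```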

Switched systems are known to exhibit subtle (in)stability behaviors, requiring system designers to carefully analyze the stability of the closed-loop systems that arise from their proposed switching control laws. This paper presents a formal approach for verifying switched system stability that blends classical ideas from the controls and verification literature using differential dynamic logic (dL), a logic for deductive verification of hybrid systems. From controls, we use standard stability notions for various classes of switching mechanisms and their corresponding Lyapunov function-based analysis techniques. From verification, we use dL's ability to verify quantified properties of hybrid systems, modeling switched systems as looping hybrid programs whose stability can be formally specified and proven by finding appropriate loop invariants, i.e., properties that are preserved across each loop iteration. This blend of ideas enables a trustworthy implementation of switched system stability verification in the KeYmaera X prover based on dL. For standard classes of switching mechanisms, the implementation provides fully automated stability proofs, including the search for suitable Lyapunov functions. Moreover, the generality of the deductive approach also enables verification of switching control laws that require non-standard stability arguments, through the design of loop invariants that suitably express the specific intuitions behind those control laws. This flexibility is demonstrated on three case studies: a model for longitudinal flight control by Branicky, an automatic cruise controller, and Brockett's nonholonomic integrator.
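
A numerical sketch of the common Lyapunov function idea (ours, not the KeYmaera X implementation): for a switched linear system $x' = A_i x$, find $P$ certifying one mode and check that the same $P$ certifies the other, so $V(x) = x^T P x$ decreases under arbitrary switching.

```python
# Sketch (ours): common quadratic Lyapunov function for x' = A_i x.
# Solve A1^T P + P A1 = -I for P, then check A2^T P + P A2 < 0.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A1 = np.array([[-1.0, 0.5], [0.0, -1.0]])
A2 = np.array([[-1.0, 0.0], [0.3, -2.0]])

P = solve_continuous_lyapunov(A1.T, -np.eye(2))  # A1^T P + P A1 = -I
assert np.all(np.linalg.eigvalsh(P) > 0), "P must be positive definite"

Q2 = A2.T @ P + P @ A2  # must be negative definite for mode 2
print("common Lyapunov function found:", np.all(np.linalg.eigvalsh(Q2) < 0))
```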

We consider the single-item interdependent value setting, where there is a monopolist, $n$ buyers, and each buyer has a private signal $s_i$ describing a piece of information about the item. Each bidder $i$ also has a valuation function $v_i(s_1,\ldots,s_n)$ mapping the (private) signals of all buyers to a positive real number representing their value for the item. This setting captures scenarios where the item's information is asymmetric or dispersed among agents, such as in competitions for oil drilling rights, or in auctions for art pieces. Due to the increased complexity of this model compared to standard private values, it is generally assumed that each bidder's valuation function $v_i$ is public knowledge. But in many situations, the seller may not know how a bidder aggregates signals into a valuation. In this paper, we design mechanisms that guarantee approximately-optimal social welfare while satisfying ex-post incentive compatibility and individual rationality for the case where the valuation functions are private to the bidders. When the valuations are public, it is possible for optimal social welfare to be attained by a deterministic mechanism under a single-crossing condition. In contrast, when the valuations are the bidders' private information, we show that no finite bound can be achieved by any deterministic mechanism even under single-crossing. Moreover, no randomized mechanism can guarantee better than an $n$-approximation. We thus consider valuation functions that are submodular over signals (SOS), introduced in the context of combinatorial auctions in a recent breakthrough paper by Eden et al. [EC'19]. Our main result is an $O(\log^2 n)$-approximation for buyers with private signals and valuations under the SOS condition. We also give a tight $\Theta(k)$-approximation for the case where each agent's valuation depends on at most $k$ other signals, even when $k$ is unknown.
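
The toy check below (ours; the formal SOS definition is in Eden et al. [EC'19]) illustrates the flavor of submodularity over signals with a hypothetical valuation $v_i(s) = s_i + \sqrt{\sum_{j\neq i} s_j}$: the marginal gain from raising another bidder's signal shrinks as the remaining signals grow.

```python
# Toy spot-check (ours) of diminishing marginals in others' signals,
# the hallmark of submodularity over signals (SOS).
import numpy as np

def v(own, others):
    return own + np.sqrt(np.sum(others))

rng = np.random.default_rng(1)
ok = True
for _ in range(1000):
    lo = rng.uniform(0.1, 1.0, size=4)
    hi = lo + rng.uniform(0.0, 1.0, size=4)  # pointwise larger signals
    delta = rng.uniform(0.1, 1.0)
    j = rng.integers(4)
    bump = delta * np.eye(4)[j]               # raise signal j by delta
    m_lo = v(1.0, lo + bump) - v(1.0, lo)
    m_hi = v(1.0, hi + bump) - v(1.0, hi)
    ok &= m_hi <= m_lo + 1e-12                # marginal gain shrinks
print("diminishing marginals observed:", bool(ok))
```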

By establishing an interesting connection between ordinary Bell polynomials and rational convolution powers, some composition and inverse relations of Bell polynomials, as well as explicit expressions for convolution roots of sequences, are obtained. Based on these results, a new method for calculating partial Bell polynomials via prime factorization is proposed. It is shown that this method is more efficient than the conventional recurrence procedure for computing Bell polynomials in most cases, requiring far fewer arithmetic operations. A detailed analysis of the computational complexity is provided, followed by some numerical evaluations.
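
For reference, the conventional recurrence that serves as the baseline is sketched below (our implementation, not the paper's method): $B_{n,k} = \sum_{i=1}^{n-k+1} \binom{n-1}{i-1} x_i B_{n-i,k-1}$, with $B_{0,0} = 1$ and $B_{n,0} = B_{0,k} = 0$ otherwise.

```python
# Baseline (ours): the standard recurrence for partial Bell polynomials
# that the paper's prime-factorization method is compared against.
from functools import lru_cache
from math import comb

def partial_bell(n, k, x):
    """Evaluate B_{n,k}(x[0], x[1], ...) with x_i = x[i-1]."""
    @lru_cache(maxsize=None)
    def B(n, k):
        if n == 0 and k == 0:
            return 1
        if n == 0 or k == 0:
            return 0
        return sum(comb(n - 1, i - 1) * x[i - 1] * B(n - i, k - 1)
                   for i in range(1, n - k + 2))
    return B(n, k)

# Sanity checks: B_{n,1} = x_n, and summing B_{n,k}(1,...,1) over k
# yields the Bell numbers 1, 1, 2, 5, 15, 52, ...
assert partial_bell(4, 1, (0, 0, 0, 7)) == 7
ones = (1,) * 10
print([sum(partial_bell(n, k, ones) for k in range(n + 1))
       for n in range(6)])  # -> [1, 1, 2, 5, 15, 52]
```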

In this paper, from a theoretical perspective, we study how powerful graph neural networks (GNNs) can be for learning approximation algorithms for combinatorial problems. To this end, we first establish a new class of GNNs that can solve a strictly wider variety of problems than existing GNNs. Then, we bridge the gap between GNN theory and the theory of distributed local algorithms to show that the most powerful GNN can learn approximation algorithms for the minimum dominating set problem and the minimum vertex cover problem with certain approximation ratios, and that no GNN can achieve better ratios. This paper is the first to elucidate the approximation ratios of GNNs for combinatorial problems. Furthermore, we prove that adding coloring or weak-coloring to each node feature improves these approximation ratios. This indicates that preprocessing and feature engineering theoretically strengthen model capabilities.
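
As a concrete point of reference (ours, not the paper's GNN), here is the textbook 2-approximation for minimum vertex cover via a maximal matching, exactly the kind of simple local algorithm whose guarantees this line of work relates to GNN expressive power.

```python
# Classical baseline (ours): 2-approximate minimum vertex cover by
# taking both endpoints of a greedily built maximal matching.
import networkx as nx

def vertex_cover_2approx(G: nx.Graph):
    cover, matched = set(), set()
    for u, v in G.edges():
        if u not in matched and v not in matched:
            matched.update((u, v))  # greedily grow a maximal matching
            cover.update((u, v))    # take both endpoints into the cover
    return cover

G = nx.petersen_graph()
C = vertex_cover_2approx(G)
assert all(u in C or v in C for u, v in G.edges())  # C covers every edge
print(f"cover size: {len(C)} (the optimum for the Petersen graph is 6)")
```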

Intersection over Union (IoU) is the most popular evaluation metric used in object detection benchmarks. However, there is a gap between optimizing the commonly used distance losses for regressing the parameters of a bounding box and maximizing this metric value. The optimal objective for a metric is the metric itself. In the case of axis-aligned 2D bounding boxes, it can be shown that $IoU$ can be directly used as a regression loss. However, $IoU$ has a plateau, making it infeasible to optimize in the case of non-overlapping bounding boxes. In this paper, we address the weaknesses of $IoU$ by introducing a generalized version as both a new loss and a new metric. By incorporating this generalized $IoU$ ($GIoU$) as a loss into state-of-the-art object detection frameworks, we show a consistent performance improvement, using both the standard $IoU$-based and the new $GIoU$-based performance measures, on popular object detection benchmarks such as PASCAL VOC and MS COCO.
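
The metric itself is simple to state and implement; the sketch below follows the paper's definition $GIoU(A,B) = IoU(A,B) - |C \setminus (A \cup B)|/|C|$, where $C$ is the smallest enclosing box, and the loss is $L_{GIoU} = 1 - GIoU$. Unlike $IoU$, $GIoU$ remains informative for disjoint boxes.

```python
# GIoU for axis-aligned boxes given as (x1, y1, x2, y2) with x1 < x2,
# y1 < y2 (degenerate boxes are assumed away in this sketch).

def giou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    # intersection and union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # smallest enclosing box C
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c_area = cw * ch
    return iou - (c_area - union) / c_area

print(giou((0, 0, 2, 2), (1, 1, 3, 3)))  # overlapping: IoU = 1/7
print(giou((0, 0, 1, 1), (2, 2, 3, 3)))  # disjoint: IoU = 0, GIoU = -7/9
```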
