
In the literature on Kleene algebra, a number of variants have been proposed which impose additional structure specified by a theory, such as Kleene algebra with tests (KAT) and the recent Kleene algebra with observations (KAO), or make specific assumptions about certain constants, as for instance in NetKAT. Many of these variants fit within the unifying perspective offered by Kleene algebra with hypotheses, which comes with a canonical language model constructed from a given set of hypotheses. For the case of KAT, this model corresponds to the familiar interpretation of expressions as languages of guarded strings. A relevant question therefore is whether Kleene algebra together with a given set of hypotheses is complete with respect to its canonical language model. In this paper, we revisit, combine and extend existing results on this question to obtain tools for proving completeness in a modular way. We showcase these tools by giving new and modular proofs of completeness for KAT, KAO and NetKAT, and we prove completeness for new variants of KAT: KAT extended with a constant for the full relation, KAT extended with a converse operation, and a version of KAT where the collection of tests only forms a distributive lattice.
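To make the guarded-string model concrete, here is a minimal Python sketch that interprets a KAT expression as a set of guarded strings, i.e. alternating sequences of atoms (complete truth assignments to the tests) and actions. The tests, the action, and the bounded unrolling depth for Kleene star are illustrative choices, not from the paper.

```python
from itertools import product

# A minimal sketch of the guarded-string model for KAT. The tests, the action,
# and the unrolling depth for Kleene star are illustrative choices.
TESTS = ("b", "c")
ATOMS = [frozenset(t for t, v in zip(TESTS, bits) if v)
         for bits in product((False, True), repeat=len(TESTS))]

def fuse(g1, g2):
    """Coalesced concatenation: defined only when the shared atom agrees."""
    return g1 + g2[1:] if g1[-1] == g2[0] else None

def lang(e, depth=3):
    """Guarded-string language of a KAT expression, with e* unrolled `depth` times."""
    op, *args = e
    if op == "act":                     # primitive action p: all strings (alpha, p, beta)
        return {(a, args[0], b) for a in ATOMS for b in ATOMS}
    if op == "test":                    # test t: the atoms where t holds
        return {(a,) for a in ATOMS if args[0] in a}
    if op == "not":                     # negation (applies to tests only in KAT)
        return {(a,) for a in ATOMS} - lang(args[0], depth)
    if op == "plus":
        return lang(args[0], depth) | lang(args[1], depth)
    if op == "seq":
        return {g for g1 in lang(args[0], depth) for g2 in lang(args[1], depth)
                if (g := fuse(g1, g2)) is not None}
    if op == "star":                    # bounded unrolling of the star
        result = {(a,) for a in ATOMS}  # the unit: all single-atom strings
        for _ in range(depth):
            result |= {g for g1 in result for g2 in lang(args[0], depth)
                       if (g := fuse(g1, g2)) is not None}
        return result
    raise ValueError(op)

# (b ; p)* ; !b  -- the guarded-string semantics of "while b do p"
expr = ("seq", ("star", ("seq", ("test", "b"), ("act", "p"))), ("not", ("test", "b")))
print(len(lang(expr)))
```

Completeness for KAT says that provable equality coincides with equality of these guarded-string languages; the question the paper studies is when the analogous statement holds for an arbitrary set of hypotheses.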


This new edition of the TOOLS conference series revives the tradition of the 50 conferences held from 1989 to 2012. TOOLS, originally "Technology of Object-Oriented Languages and Systems", later grew to cover all innovative aspects of software technology; many of today's most important software concepts were first introduced at this series. TOOLS 50+1, held in 2019 near Kazan, Russia, continued the series in the same innovative spirit, with the same enthusiasm for all things software, the same combination of scientific rigor and industrial applicability, and the same openness to all trends and communities in the field.

We describe a new algorithm to compute Whitney stratifications of real and complex algebraic varieties. This algorithm is a modification of the algorithm of Helmer and Nanda (HN), made more efficient by using techniques for equidimensional decomposition rather than computing the set of associated primes of a polynomial ideal at a key step of the HN algorithm. We note that this modified algorithm may fail to produce a minimal Whitney stratification even when the HN algorithm would. Additionally, we present an algorithm to coarsen any Whitney stratification of a complex variety to a minimal one; the theoretical basis for our approach is a classical result of Teissier.
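For reference, the condition these algorithms certify: a locally finite decomposition of a variety into smooth strata is a Whitney stratification when every incident pair of strata satisfies Whitney's condition (b), stated in its usual form below.

\[
\text{For strata } X \subseteq \overline{Y} \text{ and } p \in X: \text{ if } x_i \in X,\; y_i \in Y,\; x_i \to p,\; y_i \to p, \text{ the secant lines } \ell_i := \overline{x_i\,y_i} \to \ell, \text{ and the tangent spaces } T_{y_i}Y \to \tau, \text{ then } \ell \subseteq \tau.
\]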

We consider the parameter estimation problem in the deviated Gaussian mixture of experts in which the data are generated from $(1 - \lambda^{\ast}) g_0(Y| X)+ \lambda^{\ast} \sum_{i = 1}^{k^{\ast}} p_{i}^{\ast} f(Y|(a_{i}^{\ast})^{\top}X+b_i^{\ast},\sigma_{i}^{\ast})$, where $X, Y$ are respectively a covariate vector and a response variable, $g_{0}(Y|X)$ is a known function, $\lambda^{\ast} \in [0, 1]$ is the true but unknown mixing proportion, and $(p_{i}^{\ast}, a_{i}^{\ast}, b_{i}^{\ast}, \sigma_{i}^{\ast})$ for $1 \leq i \leq k^{\ast}$ are unknown parameters of the Gaussian mixture of experts. This problem arises from the goodness-of-fit test when we would like to test whether the data are generated from $g_{0}(Y|X)$ (null hypothesis) or from the whole mixture (alternative hypothesis). Based on the algebraic structure of the expert functions and the distinguishability between $g_0$ and the mixture part, we construct novel Voronoi-based loss functions to capture the convergence rates of maximum likelihood estimation (MLE) for our models. We further demonstrate that our proposed loss functions characterize the local convergence rates of parameter estimation more accurately than the generalized Wasserstein distance, a loss function commonly used for parameter estimation in Gaussian mixtures of experts.
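As a concrete reading of the model, the following sketch simulates data from the deviated mixture above. All parameter values, the choice $g_0 = \mathcal{N}(0,1)$, and the covariate distribution are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter values (not from the paper).
lam = 0.4                                    # deviation proportion lambda*
p   = np.array([0.5, 0.5])                   # expert mixing weights p_i*
A   = np.array([[1.0, -1.0], [0.5, 2.0]])    # expert slopes a_i*
b   = np.array([0.0, 1.0])                   # expert intercepts b_i*
sig = np.array([0.5, 1.0])                   # expert scales sigma_i*

def sample(n, d=2):
    """Draw (X, Y) from (1 - lam) g0(Y|X) + lam * sum_i p_i N(a_i'X + b_i, sig_i^2)."""
    X = rng.normal(size=(n, d))
    Y = np.empty(n)
    from_mix = rng.random(n) < lam
    # Null component g0(Y|X): here taken to be N(0, 1), a known density.
    Y[~from_mix] = rng.normal(size=(~from_mix).sum())
    # Gaussian mixture-of-experts component.
    for j in np.flatnonzero(from_mix):
        i = rng.choice(len(p), p=p)
        Y[j] = rng.normal(A[i] @ X[j] + b[i], sig[i])
    return X, Y

X, Y = sample(1000)
```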

Almost surely, the difference between the randomness deficiencies of two infinite sequences will be unbounded with respect to repeated iterations of the shift operator.

While the recent literature has seen a surge in the study of constrained bandit problems, all existing methods begin by assuming the feasibility of the underlying problem. We initiate the study of testing such feasibility assumptions, and in particular address the problem in the linear bandit setting, thus characterising the cost of feasibility testing for an unknown linear program using bandit feedback. Concretely, we test whether $\exists x: Ax \ge 0$ for an unknown $A \in \mathbb{R}^{m \times d}$, by playing a sequence of actions $x_t\in \mathbb{R}^d$ and observing $Ax_t + \mathrm{noise}$ in response. By identifying the hypothesis with the sign of the value of a minimax game, we construct a novel test based on low-regret algorithms and a nonasymptotic law of the iterated logarithm. We prove that this test is reliable and adapts to the `signal level' $\Gamma$ of any instance, with mean sample costs scaling as $\widetilde{O}(d^2/\Gamma^2)$. We complement this with a minimax lower bound of $\Omega(d/\Gamma^2)$ on the sample costs of reliable tests, dominating prior asymptotic lower bounds by capturing the dependence on $d$, and thus elucidating a basic insight missing from the extant literature on such problems.
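A toy rendition of the setup, not the paper's test (which additionally handles noise via nonasymptotic law-of-the-iterated-logarithm confidence bands): estimate $A$ from basis actions, solve the estimated minimax game by gradient descent-ascent, and read off the sign of its value. Sample sizes, step sizes, and the noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, sigma = 3, 2, 0.1
A = rng.normal(size=(m, d))            # ground truth, hidden from the tester

def pull(x):
    """Bandit feedback for action x: a noisy observation of Ax."""
    return A @ x + sigma * rng.normal(size=m)

# Feasibility of {x : Ax >= 0} (nontrivially) is the sign of the game value
#   Gamma = max_{||x|| <= 1} min_{lam in simplex} lam^T A x.
# Estimate A column by column from noisy responses to basis actions.
n = 2000
A_hat = np.column_stack([
    np.mean([pull(np.eye(d)[j]) for _ in range(n)], axis=0) for j in range(d)])

# Solve the estimated game: gradient ascent for x vs. multiplicative weights for lam.
x, lam = np.zeros(d), np.ones(m) / m
for t in range(1, 3000):
    x = x + (A_hat.T @ lam) / np.sqrt(t)
    x /= max(1.0, np.linalg.norm(x))             # project onto the unit ball
    lam = lam * np.exp(-(A_hat @ x) / np.sqrt(t))  # min player over rows
    lam /= lam.sum()
gamma_hat = (A_hat @ x).min()
print("declare feasible" if gamma_hat >= 0 else "declare infeasible")
```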

This work presents a procedure to solve the Euler equations by explicitly updating, in a conservative manner, a generic thermodynamic variable such as temperature, pressure or entropy instead of the total energy. The presented procedure is valid for any equation of state and spatial discretization. When using complex equations of state such as Span-Wagner, choosing temperature as the generic thermodynamic variable yields large reductions in the computational cost associated with thermodynamic evaluations. Results computed with a state-of-the-art thermodynamic model are presented, and computational times are analyzed. Particular attention is dedicated to the conservation of total energy, the propagation speed of shock waves, and jump conditions. The procedure is thoroughly tested using the Span-Wagner equation of state, through the CoolProp thermodynamic library, and the Van der Waals equation of state, in both the ideal and non-ideal compressible fluid-dynamics regimes, by comparing it to the standard total-energy update and to analytical solutions where available.
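For contrast, here is a minimal sketch of the standard total-energy update the paper compares against (1D, first-order, Rusanov flux, ideal-gas EOS, all illustrative choices). After each conservative step, temperature must be recovered from density and internal energy: closed-form for an ideal gas, but an iterative inversion for Helmholtz-based models such as Span-Wagner, which is the per-cell, per-step cost that updating temperature directly is designed to avoid.

```python
import numpy as np

gamma, R = 1.4, 287.0  # ideal-gas constants (illustrative)

def step(U, dx, dt):
    """Advance conserved variables U = (rho, rho*u, E), then recover T from the EOS."""
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    F = np.array([mom, mom * u + p, (E + p) * u])
    a = np.abs(u) + np.sqrt(gamma * p / rho)          # local wave speed
    amax = np.maximum(a[:-1], a[1:])
    # Rusanov (local Lax-Friedrichs) flux at interior interfaces
    Fh = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * amax * (U[:, 1:] - U[:, :-1])
    U = U.copy()
    U[:, 1:-1] -= dt / dx * (Fh[:, 1:] - Fh[:, :-1])
    # EOS inversion: T from (rho, internal energy). Closed-form here; an
    # iterative solve for Span-Wagner-type models.
    rho, mom, E = U
    T = (E - 0.5 * mom**2 / rho) * (gamma - 1.0) / (rho * R)
    return U, T

# Sod-like shock tube
N = 200; dx = 1.0 / N
xc = (np.arange(N) + 0.5) * dx
rho0 = np.where(xc < 0.5, 1.0, 0.125)
p0 = np.where(xc < 0.5, 1.0, 0.1)
U = np.array([rho0, np.zeros(N), p0 / (gamma - 1.0)])
for _ in range(100):
    U, T = step(U, dx, dt=0.2 * dx)
```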

Legislative systems face growing complexity due to the ever-increasing number of laws and the intricate interdependencies between them. Traditional methods of storing and analyzing legal systems, mainly based on RDF, struggle with this complexity, hindering the efficient knowledge discovery required by domain experts. In this paper, we propose to model legislation as a property graph, where edges represent citations, modifications, and abrogations between laws and their articles or attachments, all represented as nodes and edges with properties. As a practical use case, we implement the model for the Italian legislative system. First, we describe our approach to extracting knowledge from legal texts. To this aim, we leverage Akoma Ntoso, the recently adopted international XML standard for legal documents, to parse and identify entities, relationships and properties. Next, we describe the model and the schema implemented using Neo4j, the market-leading graph database management system. The schema is designed to capture the structure and hierarchy of laws, together with their interdependencies. We show how such a property graph enables efficient answers to complex and relevant queries that were previously impractical on raw text. By leveraging other implementations of the Akoma Ntoso standard together with the proposed property graph approach, we are confident that this work will facilitate a comprehensive comparison of legislative systems and their complexities.
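As an illustration of the kind of schema and queries involved, the sketch below uses the official neo4j Python driver. The node labels, relationship types, property names, and connection details are assumptions for illustration, not the paper's exact schema.

```python
from neo4j import GraphDatabase

# Connection details are placeholders.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Hypothetical schema in the spirit described above: Law and Article nodes,
# structural HAS_ARTICLE edges, and CITES / MODIFIES relationships.
CREATE = """
MERGE (l:Law {eli: $eli, title: $title, date: $date})
MERGE (a:Article {eli: $art_eli, number: $number})
MERGE (l)-[:HAS_ARTICLE]->(a)
"""

CITES = """
MATCH (src:Article {eli: $src}), (dst:Article {eli: $dst})
MERGE (src)-[:CITES {since: $date}]->(dst)
"""

# Example query of the kind that is impractical on raw text:
# which articles were modified by which later laws?
MODIFIED = """
MATCH (new:Law)-[:MODIFIES]->(old:Article)<-[:HAS_ARTICLE]-(l:Law)
RETURN l.title AS law, old.number AS article, new.title AS modified_by
"""

with driver.session() as session:
    for record in session.run(MODIFIED):
        print(record["law"], record["article"], record["modified_by"])
```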

The quest for a formula that satisfactorily measures the effective degrees of freedom in kernel density estimation (KDE) is a long-standing problem with few solutions. Starting from the orthogonal polynomial sequence (OPS) expansion for the ratio of the empirical to the oracle density, we show how convolution with the kernel leads to a new OPS with respect to which the resulting KDE may be expressed. The expansion coefficients of the two OPS systems can then be related via a kernel sensitivity matrix, and this naturally leads to a definition of effective parameters by taking the trace of a symmetrized, positive semi-definite, normalized version of that matrix. The resulting effective degrees of freedom (EDoF) formula is an oracle-based quantity, the first of its kind proposed in the literature. Asymptotic properties of the empirical EDoF are worked out through influence functions. Numerical investigations confirm the theoretical insights.
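To see why the problem is long-standing, consider one naive candidate: the trace of the KDE smoother matrix $S_{ij} = K_h(x_i - x_j)/n$, which reduces to $K(0)/h$ and so does not depend on the data at all. The sketch below (Gaussian kernel, illustrative bandwidth) demonstrates this shortcoming, which oracle-based formulas like the one above aim to overcome.

```python
import numpy as np

def gauss(u):
    """Standard Gaussian kernel."""
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def naive_edof(x, h):
    """Trace of the naive KDE smoother matrix S_ij = K_h(x_i - x_j) / n."""
    n = len(x)
    S = gauss((x[:, None] - x[None, :]) / h) / (n * h)
    return np.trace(S)            # = gauss(0) / h, independent of the data x

x = np.random.default_rng(2).normal(size=200)
print(naive_edof(x, 0.3))         # ~1.33 for *any* sample of any size
```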

Reshaping, a point operation that alters the characteristics of a signal, has been shown to improve compression efficiency in video coding practice. Out-of-loop reshaping, which directly modifies the input video signal, was first adopted as supplemental enhancement information (SEI) for HEVC/H.265, without any changes to the core design of the video codec. VVC/H.266 further improves coding efficiency by adopting in-loop reshaping, which modifies the residual signal processed in the hybrid coding loop. In this paper, we theoretically analyze the rate-distortion performance of in-loop reshaping and use experiments to verify the theoretical results. We prove that in-loop reshaping can improve coding efficiency when the entropy coder adopted in the coding pipeline is suboptimal, which matches the practical scenarios in which video codecs operate. We derive the PSNR gain in closed form and show that the theoretically predicted gain is consistent with that measured in experiments on standard test video sequences.
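To fix ideas, here is a toy piecewise-linear reshaper of the general kind used for luma mapping: codewords are reallocated toward sample-value ranges where the source is dense, which is where a suboptimal entropy coder benefits most. The bin count, codeword-allocation rule, and bit depth are illustrative choices, not any codec's normative design.

```python
import numpy as np

def forward_lut(samples, bins=16, bitdepth=10):
    """Monotone piecewise-linear forward reshaping LUT driven by the sample histogram."""
    n = 1 << bitdepth
    hist, _ = np.histogram(samples, bins=bins, range=(0, n))
    cw = np.round((hist + 1) / (hist + 1).sum() * n).astype(int)  # codewords per bin
    seg = n // bins
    lut, lo = np.empty(n, dtype=int), 0
    for b in range(bins):
        lut[b * seg:(b + 1) * seg] = lo + np.arange(seg) * cw[b] // seg
        lo += cw[b]
    return np.minimum(lut, n - 1)

def inverse(lut, y):
    """The forward LUT is nondecreasing, so searchsorted acts as its inverse."""
    return np.searchsorted(lut, y, side="left")

rng = np.random.default_rng(3)
frame = rng.normal(512, 60, size=10_000).clip(0, 1023).astype(int)
lut = forward_lut(frame)
restored = inverse(lut, lut[frame])
print(np.abs(restored - frame).max())  # round-trip error bounded by codeword granularity
```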

Many organizations use algorithms that have a disparate impact, i.e., the benefits or harms of the algorithm fall disproportionately on certain social groups. Addressing an algorithm's disparate impact can be challenging, especially because it is often unclear whether reducing this impact is possible without sacrificing other important objectives of the organization, such as accuracy or profit. Establishing the improvability of algorithms with respect to multiple criteria is of both conceptual and practical interest: in many settings, disparate impact that would otherwise be prohibited under US federal law is permissible if it is necessary to achieve a legitimate business interest. The question is how a policy-maker can formally substantiate, or refute, this necessity defense. In this paper, we provide an econometric framework for testing the hypothesis that it is possible to improve on the fairness of an algorithm without compromising on other pre-specified objectives. Our proposed test is simple to implement and can be applied under any exogenous constraint on the algorithm space. We establish the large-sample validity and consistency of our test, and illustrate its practical application by evaluating a healthcare algorithm originally considered by Obermeyer et al. (2019). In this application, we reject the null hypothesis that it is not possible to reduce the algorithm's disparate impact without compromising the accuracy of its predictions.
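A stylized numerical rendering of the improvability question, not the paper's test statistic: within a constrained algorithm space (here, hypothetical per-group decision thresholds on a risk score, on synthetic data), can the disparity be reduced without accuracy falling below the status quo? The paper's contribution is an inferential test of exactly this comparison with large-sample guarantees, whereas the plug-in check below ignores sampling noise.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic population (all distributions illustrative).
n = 4000
group = rng.integers(0, 2, n)                          # protected-group indicator
score = rng.normal(0.3 * group, 1.0, n)                # algorithm's risk score
y = (score + rng.normal(0.0, 1.0, n) > 0).astype(int)  # outcome

def evaluate(t0, t1):
    """Accuracy and demographic-disparity of per-group thresholds (t0, t1)."""
    pred = (score > np.where(group == 0, t0, t1)).astype(int)
    accuracy = (pred == y).mean()
    disparity = abs(pred[group == 0].mean() - pred[group == 1].mean())
    return accuracy, disparity

acc0, gap0 = evaluate(0.0, 0.0)                        # status-quo algorithm
best_gap = gap0
for t0 in np.linspace(-1, 1, 41):                      # exogenously constrained space
    for t1 in np.linspace(-1, 1, 41):
        acc, gap = evaluate(t0, t1)
        if acc >= acc0 and gap < best_gap:
            best_gap = gap
print(f"status-quo disparity {gap0:.3f}; best at no accuracy loss {best_gap:.3f}")
```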

Segmentation models for brain lesions in MRI are commonly developed for a specific disease and trained on data with a predefined set of MRI modalities. Each such model cannot segment the disease using data with a different set of MRI modalities, nor can it segment any other type of disease. Moreover, this training paradigm does not allow a model to benefit from learning from heterogeneous databases that may contain scans and segmentation labels for different types of brain pathologies and diverse sets of MRI modalities. Is it feasible to use Federated Learning (FL) for training a single model on client databases that contain scans and labels of different brain pathologies and diverse sets of MRI modalities? We demonstrate promising results by combining appropriate, simple, and practical modifications to the model and training strategy: designing a model with input channels that cover the whole set of modalities available across clients, training with random modality drop, and exploring the effects of feature normalization methods. Evaluation on 7 brain MRI databases with 5 different diseases shows that such an FL framework can train a single model that performs promisingly in segmenting all disease types seen during training. Importantly, it is able to segment these diseases in new databases whose sets of modalities differ from those of the training clients. These results demonstrate, for the first time, the feasibility and effectiveness of using FL to train a single segmentation model on decentralised data with diverse brain diseases and MRI modalities, a necessary step towards leveraging heterogeneous real-world databases. Code will be made available at: //github.com/FelixWag/FL-MultiDisease-MRI
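A minimal PyTorch sketch of the random-modality-drop ingredient described above: the model takes one input channel per modality in the union across clients, channels a client lacks stay zero, and available channels are randomly zeroed during training so the network learns to segment from any subset. The modality list, drop probability, and helper name are illustrative assumptions.

```python
import torch

# Union of modalities across clients (illustrative names).
ALL_MODALITIES = ("T1", "T1c", "T2", "FLAIR")

def prepare_input(scans, p_drop=0.3, training=True):
    """scans: dict mapping modality name -> (D, H, W) tensor this client has."""
    shape = next(iter(scans.values())).shape
    kept = [m for m in scans if not (training and torch.rand(()) < p_drop)]
    if not kept:                       # always keep at least one available modality
        kept = [next(iter(scans))]
    x = torch.zeros(len(ALL_MODALITIES), *shape)
    for i, m in enumerate(ALL_MODALITIES):
        if m in kept:                  # missing or dropped modalities stay zero
            x[i] = scans[m]
    return x.unsqueeze(0)              # (1, num_modalities, D, H, W)

# A client holding only two of the four modalities:
client_scans = {"T1": torch.randn(64, 64, 64), "FLAIR": torch.randn(64, 64, 64)}
print(prepare_input(client_scans).shape)   # torch.Size([1, 4, 64, 64, 64])
```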
