
Non-linear polynomial systems over finite fields are used to model the functional behavior of cryptosystems, with applications in system security, computer cryptography, and post-quantum cryptography. Solving such polynomial systems is also among the most difficult problems in mathematics. In this paper, we propose an automated reasoning procedure for deciding the satisfiability of a system of non-linear equations over finite fields. We introduce zero decomposition techniques to prove that polynomial constraints over finite fields yield finite-basis explanation functions. We use these explanation functions in model-constructing satisfiability solving, allowing us to equip a CDCL-style search procedure with tailored theory reasoning for SMT solving over finite fields. We implemented our approach and provide a novel and effective reasoning prototype for non-linear arithmetic over finite fields.
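
For illustration only (exhaustive search is not the paper's procedure, and the system below is a hypothetical example), the following Python sketch decides satisfiability of a small non-linear system over the prime field $\mathrm{GF}(5)$ by enumerating all assignments:

```python
from itertools import product

# Hypothetical example (not the paper's procedure): decide satisfiability of a
# non-linear system over GF(5) by brute-force enumeration.
P = 5  # prime modulus of the finite field GF(5)

def constraints(x, y):
    """Two non-linear constraints over GF(5), reduced modulo P."""
    return ((x * x + y * y) % P, (x * y - 2) % P)

def satisfying_assignment():
    """Return a model (x, y) with all constraints equal to 0, or None."""
    for x, y in product(range(P), repeat=2):
        if constraints(x, y) == (0, 0):
            return (x, y)
    return None

print(satisfying_assignment())  # (1, 2): 1 + 4 = 0 mod 5 and 1*2 = 2 mod 5
```

Enumeration scales as $p^k$ in the number of variables $k$, which is precisely why dedicated theory reasoning, such as the model-constructing procedure proposed in the paper, is needed.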

Related content

Almost all problems in applied mathematics, including the analysis of dynamical systems, deal with spaces of real-valued functions on Euclidean domains in their formulation and solution. In this paper, we describe the tool Ariadne, which provides a rigorous calculus for working with Euclidean functions. We first introduce the Ariadne framework, which is based on a clean separation of objects as providing exact, effective, validated, and approximate information. We then discuss the function calculus as implemented in Ariadne, including polynomial function models, which are the fundamental class for concrete computations. We then consider the solution of some core problems of functional analysis, namely the solution of algebraic equations and differential equations, and briefly discuss their use for the analysis of hybrid systems. We give examples of C++ and Python code for performing the various calculations. Finally, we discuss progress on extensions, including improvements to the function calculus and extensions to more complicated classes of systems.
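
To give a flavour of what a validated calculus computes (a toy sketch in plain Python; this is not Ariadne's API, whose actual C++ and Python interfaces are illustrated in the paper), consider interval evaluation of a polynomial, which returns a rigorous enclosure of its range on a box:

```python
# Toy interval arithmetic for validated evaluation, in the spirit of (but not
# using) Ariadne's function calculus. A real implementation would also round
# endpoints outward; this sketch omits rounding control for brevity.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Enclose f(x) = x*x + x on the box x in [0, 1]: the result contains the exact
# range [0, 2] (interval arithmetic may over-approximate in general).
x = Interval(0.0, 1.0)
print(x * x + x)  # [0.0, 2.0]
```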

Understanding the relationship between the composition of a research team and the potential impact of its research papers is crucial, as it can steer the development of new science policies for improving the research enterprise. Numerous studies assess how the characteristics and diversity of research teams influence their performance across several dimensions: ethnicity, internationality, size, and others. In this paper, we explore the impact of diversity in terms of the authors' expertise. To this purpose, we retrieved 114K papers in the field of Computer Science and analysed how the diversity of research fields within a research team relates to the number of citations their papers received in the subsequent 5 years. The results show that two metrics we defined, reflecting the diversity of expertise, are significantly associated with the number of citations. This suggests that, at least in Computer Science, diversity of expertise is key to scientific impact.
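
The abstract does not spell out the two metrics; as a purely hypothetical illustration of how a diversity-of-expertise score can be computed from a team's publication record, a Shannon-entropy-style index looks as follows (the metric and field labels are assumptions, not the paper's definitions):

```python
import math
from collections import Counter

# Hypothetical diversity score: Shannon entropy of the research fields
# represented in a team's publication history. Higher values indicate a more
# even spread of expertise across fields.
def field_diversity(fields):
    counts = Counter(fields)
    total = sum(counts.values())
    probabilities = [c / total for c in counts.values()]
    return -sum(p * math.log(p) for p in probabilities)

team = ["machine learning", "databases", "machine learning", "HCI"]
print(round(field_diversity(team), 3))
```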

Planar functions, introduced by Dembowski and Ostrom, are functions from a finite field to itself that give rise to finite projective planes. They exist, however, only over finite fields of odd characteristic. They have attracted much attention in the last decade thanks to their theoretical interest and their deep and varied applications in many fields. This paper focuses on planar trinomials over cubic and quartic extensions of finite fields. Our results are obtained using connections with quadratic forms and classical algebraic tools over finite fields. Furthermore, given the generality of our approach, the presented methodology could be employed to derive further planar functions over finite extension fields.
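
For context (a standard definition, not restated in the abstract): a function $f:\mathbb{F}_q\to\mathbb{F}_q$, with $q$ odd, is planar if for every $a\neq 0$ the difference map $x\mapsto f(x+a)-f(x)$ is a bijection of $\mathbb{F}_q$. The classical example is $f(x)=x^2$, for which $f(x+a)-f(x)=2ax+a^2$ is an affine bijection whenever $2a\neq 0$; in characteristic 2 this difference map is the constant $a^2$, which is why planar functions exist only in odd characteristic.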

We study cut finite element discretizations of a Darcy interface problem based on the mixed finite element pairs $\textbf{RT}_0\times Q_0$, $\textbf{BDM}_1\times Q_0$, and $\textbf{RT}_1\times Q_1$. Here $Q_k$ is the space of discontinuous polynomial functions of degree k, $\textbf{RT}_{k}$ is the Raviart-Thomas space, and $\textbf{BDM}_k$ is the Brezzi-Douglas-Marini space. We show that the standard ghost penalty stabilization, often added in the weak forms of cut finite element methods for stability and control of the condition number of the resulting linear system matrix, destroys the divergence-free property of the considered element pairs. Therefore, we propose two corrections to the standard stabilization strategy: using macro-elements and new stabilization terms for the pressure. By decomposing the computational mesh into macro-elements and applying ghost penalty terms only on interior edges of macro-elements, stabilization is active only where needed. By modifying the standard stabilization terms for the pressure we recover the optimal approximation of the divergence without losing control of the condition number of the linear system matrix. We derive a priori error estimates for the proposed unfitted finite element discretization based on $\textbf{RT}_k\times Q_k$, $k\geq 0$. Numerical experiments indicate that with the new method we have 1) optimal rates of convergence of the approximate velocity and pressure; 2) well-posed linear systems where the condition number of the system matrix scales as it does for fitted finite element discretizations; 3) optimal rates of convergence of the approximate divergence with pointwise divergence-free approximations of solenoidal velocity fields. All three properties hold independently of how the interface is positioned relative to the computational mesh.
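
As background, the element pairs above discretize the standard mixed formulation of the Darcy problem (stated generically here with boundary conditions omitted; the paper's interface problem adds coupling conditions across the interface): find $(\mathbf{u},p)\in \mathbf{H}(\mathrm{div};\Omega)\times L^2(\Omega)$ such that
$$(K^{-1}\mathbf{u},\mathbf{v}) - (p,\nabla\cdot\mathbf{v}) = 0, \qquad (\nabla\cdot\mathbf{u},q) = (f,q)$$
for all test functions $(\mathbf{v},q)$. The divergence-free property at stake is that the discrete velocity satisfies $\nabla\cdot\mathbf{u}_h = 0$ pointwise whenever $f = 0$.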

We derive upper bounds on the Wasserstein distance ($W_1$), with respect to the $\sup$-norm, between any continuous $\mathbb{R}^d$-valued random field indexed by the $n$-sphere and a Gaussian random field, based on Stein's method. We develop a novel Gaussian smoothing technique that allows us to transfer a bound in a smoother metric to the $W_1$ distance. The smoothing is based on covariance functions constructed using powers of Laplacian operators, designed so that the associated Gaussian process has a tractable Cameron-Martin or reproducing kernel Hilbert space. This feature enables us to move beyond the one-dimensional, interval-based index sets previously considered in the literature. Specializing our general result, we obtain the first bounds on the Gaussian random field approximation of wide random neural networks of any depth with Lipschitz activation functions, at the random field level. Our bounds are explicitly expressed in terms of the widths of the network and the moments of the random weights. We also obtain tighter bounds when the activation function has three bounded derivatives.
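
For reference, the metric in question is the standard Wasserstein-1 distance in its Kantorovich-Rubinstein (dual) form, with Lipschitz continuity measured in the $\sup$-norm on the space of continuous fields (a standard definition, restated here for convenience):
$$W_1(X,Y) = \sup_{h:\ \mathrm{Lip}(h)\le 1} \left|\mathbb{E}[h(X)] - \mathbb{E}[h(Y)]\right|,$$
where the supremum ranges over functionals $h$ on $C(\mathbb{S}^n;\mathbb{R}^d)$ that are 1-Lipschitz with respect to $\|\cdot\|_\infty$.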

Pseudo-geometric designs are combinatorial designs which share the same parameters as a finite geometry design but which are not isomorphic to that design. Many pseudo-geometric designs have been constructed by methods from finite geometry and combinatorics; however, to the best of our knowledge, no pseudo-geometric design with parameters $S\left (2, q+1,(q^n-1)/(q-1)\right )$ has been constructed by the approach of coding theory. In this paper, we use cyclic codes to construct pseudo-geometric designs. We first present a family of ternary cyclic codes from the $m$-sequences with Welch-type decimation $d=2\cdot 3^{(n-1)/2}+1$, and obtain infinite families of 2-designs and a family of Steiner systems $S\left (2, 4, (3^n-1)/2\right )$ from these cyclic codes and their duals. Moreover, the parameters of these cyclic codes and their shortened codes are also determined. Some of these ternary codes are optimal or almost optimal. Finally, we show that one of the obtained Steiner systems is inequivalent to the point-line design of the projective space $\mathrm{PG}(n-1,3)$ and is thus a pseudo-geometric design.
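
For context (standard definitions, not restated in the abstract): a Steiner system $S(t,k,v)$ is a set of $v$ points together with a collection of $k$-element blocks such that every $t$-subset of points lies in exactly one block. The point-line design of $\mathrm{PG}(n-1,q)$ is an $S\left (2, q+1,(q^n-1)/(q-1)\right )$, and a design with these parameters that is not isomorphic to it is precisely what is meant here by a pseudo-geometric design.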

In this paper we study a non-local Cahn-Hilliard equation with a singular single-well potential and degenerate mobility. The equation arises as a particular case of a more general model derived for a binary, saturated, closed, and incompressible mixture, composed of a tumor phase and a healthy phase, evolving in a bounded domain. The general system couples a Darcy-type evolution for the average velocity field with a convective reaction-diffusion evolution for the nutrient concentration and a non-local convective Cahn-Hilliard equation for the tumor phase. The main mathematical difficulties are related to the proof of the separation property for the tumor phase in the Cahn-Hilliard equation: to the best of our knowledge, this problem is still open in the literature. For this reason, in the present contribution we restrict the analytical study to the Cahn-Hilliard equation alone. For the non-local Cahn-Hilliard equation with singular single-well potential and degenerate mobility, we study the existence and uniqueness of weak solutions for spatial dimensions $d\leq 3$. After showing existence, we prove the strict separation property in three spatial dimensions, implying the same property also in lower spatial dimensions, which opens the way to the proof of uniqueness of solutions. Finally, we propose a well-posed and gradient-stable continuous finite element approximation of the model for $d\leq 3$, which preserves the physical properties of the continuous solution and is computationally efficient, and we show simulation results in two spatial dimensions which demonstrate the consistency of the proposed scheme and describe the phase-ordering dynamics associated with the system.
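
A representative form of a non-local Cahn-Hilliard equation with degenerate mobility (stated generically; the paper's precise constitutive choices may differ) is
$$\partial_t\varphi = \nabla\cdot\big(b(\varphi)\nabla\mu\big), \qquad \mu = F'(\varphi) + (J*1)\,\varphi - J*\varphi,$$
where $b$ is the degenerate mobility, $F$ the singular single-well potential, and $J$ a symmetric interaction kernel whose convolution terms replace the local interfacial energy contribution $-\Delta\varphi$ of the classical Cahn-Hilliard equation.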

Explainable Artificial Intelligence (XAI) is transforming the field of Artificial Intelligence (AI) by enhancing end-users' trust in machines. As the number of connected devices keeps growing, the Internet of Things (IoT) market needs to be trustworthy for end-users. However, the existing literature still lacks a systematic and comprehensive survey on the use of XAI for IoT. To bridge this gap, in this paper we review XAI frameworks with a focus on their characteristics and support for IoT. We illustrate the widely used XAI services for IoT applications, such as security enhancement, the Internet of Medical Things (IoMT), Industrial IoT (IIoT), and the Internet of City Things (IoCT). We also discuss the choice of XAI models for IoT systems in these applications, with appropriate examples, and summarize the key inferences for future work. Moreover, we present cutting-edge developments in edge XAI architectures and the support of sixth-generation (6G) communication services for IoT applications, along with key inferences. In a nutshell, this paper constitutes the first holistic compilation on the development of XAI-based frameworks tailored to the demands of future IoT use cases.

Autonomous driving has achieved a significant milestone in research and development over the last decade. There is increasing interest in the field as the deployment of self-operating vehicles on roads promises safer and more ecologically friendly transportation systems. With the rise of computationally powerful artificial intelligence (AI) techniques, autonomous vehicles can sense their environment with high precision, make safe real-time decisions, and operate more reliably without human intervention. However, intelligent decision-making in autonomous cars is not generally understandable by humans in the current state of the art, and this deficiency hinders the technology from being socially acceptable. Hence, aside from making safe real-time decisions, the AI systems of autonomous vehicles also need to explain how these decisions are reached in order to be regulatory compliant across many jurisdictions. Our study sheds comprehensive light on developing explainable artificial intelligence (XAI) approaches for autonomous vehicles. In particular, we make the following contributions. First, we provide a thorough overview of the present gaps with respect to explanations in the state-of-the-art autonomous vehicle industry. Second, we present a taxonomy of explanations and explanation receivers in this field. Third, we propose a framework for an architecture of end-to-end autonomous driving systems and justify the role of XAI in both debugging and regulating such systems. Finally, as future research directions, we provide a field guide on XAI approaches for autonomous driving that can improve operational safety and transparency, towards achieving public approval by regulators, manufacturers, and all engaged stakeholders.

Reasoning with knowledge expressed in natural language and Knowledge Bases (KBs) is a major challenge for Artificial Intelligence, with applications in machine reading, dialogue, and question answering. General neural architectures that jointly learn representations and transformations of text are very data-inefficient, and it is hard to analyse their reasoning process. These issues are addressed by end-to-end differentiable reasoning systems such as Neural Theorem Provers (NTPs), although they can only be used with small-scale symbolic KBs. In this paper we first propose Greedy NTPs (GNTPs), an extension to NTPs addressing their complexity and scalability limitations, thus making them applicable to real-world datasets. This result is achieved by dynamically constructing the computation graph of NTPs and including only the most promising proof paths during inference, thus obtaining orders of magnitude more efficient models. Then, we propose a novel approach for jointly reasoning over KBs and textual mentions, by embedding logic facts and natural language sentences in a shared embedding space. We show that GNTPs perform on par with NTPs at a fraction of their cost while achieving competitive link prediction results on large datasets, providing explanations for predictions, and inducing interpretable models. Source code, datasets, and supplementary material are available online at //github.com/uclnlp/gntp.
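
A minimal sketch of the pruning idea behind GNTPs, namely retrieving only the most promising facts by embedding similarity before expanding proof paths (the embeddings, dimensions, and scoring below are illustrative and not taken from the released implementation):

```python
import numpy as np

# Toy sketch of GNTP-style pruning: instead of unifying a goal with every fact
# in the KB (as NTPs do), retrieve only the top-k facts whose embeddings are
# closest to the goal embedding, and expand proof paths for those alone.
rng = np.random.default_rng(0)
kb_embeddings = rng.normal(size=(10_000, 64))   # one row per KB fact
goal_embedding = rng.normal(size=(64,))

def top_k_facts(goal, facts, k=5):
    # Negative squared Euclidean distance as a similarity score.
    scores = -np.sum((facts - goal) ** 2, axis=1)
    return np.argsort(scores)[-k:][::-1]        # indices of the k closest facts

candidates = top_k_facts(goal_embedding, kb_embeddings)
print(candidates)  # only these facts enter the differentiable proof search
```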
