
In this paper, we propose a computational interpretation of the generalized Kreisel-Putnam rule, also known as the generalized Harrop rule or simply the Split rule, in the style of BHK semantics. We achieve this by exploiting the Curry-Howard correspondence between formulas and types. First, we inspect the inferential behavior of the Split rule in the setting of a natural deduction system for intuitionistic propositional logic. This inspection guides our formulation of a program that captures the computational content of the typed Split rule. In other words, we want to find an appropriate selector function for the Split rule by considering its typed variant. Our investigation can also be reframed as an effort to answer the following question: is the Split rule constructively valid in the sense of BHK semantics? Our answer is positive for the Split rule as well as for its newly proposed generalized version, called the S rule.
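For reference, a standard statement of the rules discussed above, written in generic natural-deduction notation rather than the paper's own:

```latex
% Kreisel-Putnam rule: from a proof of ¬A → (B ∨ C),
% conclude (¬A → B) ∨ (¬A → C).
\[
\frac{\neg A \rightarrow (B \lor C)}{(\neg A \rightarrow B) \lor (\neg A \rightarrow C)}
\ \text{(KP)}
\qquad
\frac{H \rightarrow (B \lor C)}{(H \rightarrow B) \lor (H \rightarrow C)}
\ \text{(Split)}
\]
% In the Split rule, H ranges over Harrop formulas
% (formulas with no disjunction in positive position).
```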

Related content

This paper analyzes the stability of the class of Time-Accurate and Highly-Stable Explicit Runge-Kutta (TASE-RK) methods, introduced in 2021 by Bassenne et al. (J. Comput. Phys.) for the numerical solution of stiff Initial Value Problems (IVPs). Such numerical methods are easy to implement and require the solution of a limited number of linear systems per step, whose coefficient matrices involve the exact Jacobian $J$ of the problem. To significantly reduce the computational cost of TASE-RK methods without altering their consistency properties, it is possible to replace $J$ with a matrix $A$ (not necessarily tied to $J$) in their formulation, for instance fixed for a certain number of consecutive steps or even constant. However, the stability properties of TASE-RK methods strongly depend on this choice, and so far they have been studied only under the assumption $A=J$. In this manuscript, we theoretically investigate the conditional and unconditional stability of TASE-RK methods for arbitrary $A$. To this end, we first split the Jacobian as $J=A+B$. Then, through the use of stability diagrams and their connections with the field of values, we analyze both the case in which $A$ and $B$ are simultaneously diagonalizable and the case in which they are not. Numerical experiments, conducted on Partial Differential Equations (PDEs) arising from applications, show the correctness and utility of the theoretical results derived in the paper, as well as the good stability and efficiency of TASE-RK methods when $A$ is suitably chosen.
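As a rough illustration of the cost argument above: freezing $A$ allows the linear algebra to be factorized once and reused across steps. The sketch below assumes a resolvent-type coefficient matrix $I - \alpha\,\Delta t\,A$ (the exact matrices used by TASE-RK are not reproduced here) and is not the authors' implementation.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Minimal sketch (not the authors' code): with a frozen approximate
# Jacobian A, the coefficient matrix I - alpha*dt*A is factorized once
# and the factorization is reused, instead of refactorizing with the
# exact Jacobian J at every step. The coefficient alpha is illustrative.
def make_frozen_solver(A, dt, alpha):
    n = A.shape[0]
    lu, piv = lu_factor(np.eye(n) - alpha * dt * A)  # one O(n^3) factorization
    def apply_resolvent(v):
        return lu_solve((lu, piv), v)                # O(n^2) per application
    return apply_resolvent

# With J = A + B, only A enters the linear algebra; the contribution of B
# is handled by the explicit RK stages, as in the stability analysis above.
```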

In this paper, we tackle structure learning of Directed Acyclic Graphs (DAGs), with the idea of exploiting available prior knowledge of the domain at hand to guide the search for the best structure. In particular, we assume that the topological ordering of the variables is known in addition to the given data. We study a new algorithm for learning the structure of DAGs, proving its theoretical consistency in the limit of infinite observations. Furthermore, we experimentally compare the proposed algorithm to a number of popular competitors in order to study its behavior on finite samples.
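A known topological ordering makes each variable's candidate parent set explicit: parents can only be predecessors in the ordering. The sketch below is purely illustrative (per-node sparse regression, not the paper's algorithm):

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Illustrative sketch only: with the ordering known, structure learning
# reduces to one variable-selection problem per node, restricted to the
# node's predecessors in the ordering.
def learn_dag_given_ordering(X, order, tol=1e-8):
    d = X.shape[1]
    adj = np.zeros((d, d), dtype=bool)
    for pos, j in enumerate(order[1:], start=1):
        preds = order[:pos]                       # candidate parents of j
        coef = LassoCV(cv=5).fit(X[:, preds], X[:, j]).coef_
        adj[np.array(preds)[np.abs(coef) > tol], j] = True
    return adj  # adj[i, j] = True iff edge i -> j
```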

Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning. In this paper, we propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features. To this end, for each local neighbourhood we extract the distribution of features in the egonet and compare it against a set of learned label distributions by taking the histogram intersection kernel. The similarity information is then propagated to other nodes in the network, effectively creating a message passing-like mechanism where the message is determined by the ensemble of the features. We perform an ablation study to evaluate the network's performance under different choices of its hyper-parameters. Finally, we test our model on standard graph classification and regression benchmarks, and we find that it outperforms widely used alternative approaches, including both graph kernels and graph neural networks.
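A minimal sketch of the histogram intersection kernel used for the comparison step, assuming nonnegative, normalized histograms (not the authors' implementation):

```python
import numpy as np

# Histogram intersection kernel: sum of bin-wise minima of two histograms.
def histogram_intersection(h1, h2):
    return np.minimum(h1, h2).sum()

# Example: similarity of a node's egonet histogram to two learned
# label-distribution prototypes (both names are illustrative).
ego = np.array([0.5, 0.3, 0.2])
protos = np.array([[0.6, 0.2, 0.2], [0.1, 0.1, 0.8]])
sims = np.array([histogram_intersection(ego, p) for p in protos])  # [0.9, 0.4]
```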

In this paper, we further investigate the problem of selecting a set of design points for universal kriging, which is a widely used technique for spatial data analysis. Our goal is to select the design points so as to make simultaneous predictions of the random variable of interest at a finite number of unsampled locations with maximum precision. Specifically, we consider as response a correlated random field given by a linear model with an unknown parameter vector and a spatial error correlation structure. We propose a new design criterion that aims at simultaneously minimizing the variation of the prediction errors at various points. We also present several efficient techniques for incrementally building designs for this criterion that scale well to high dimensions. The method is thus particularly suitable for big data applications in areas of spatial data analysis such as mining, hydrogeology, natural resource monitoring, and environmental sciences, or equally for computer simulation experiments. We demonstrate the effectiveness of the proposed designs through two illustrative examples: one based on simulation and another based on real data from Upper Austria.
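To make the incremental idea concrete, here is a hedged sketch of a greedy min-max design loop. It uses simple kriging without the trend term of universal kriging, and it is not the paper's criterion or code; `kernel` is an assumed covariance function supplied by the caller.

```python
import numpy as np

# Simple-kriging prediction variance at the prediction sites, given the
# design covariance K_dd, cross-covariance k_dp, and prior variances.
def kriging_var(K_dd, k_dp, k_pp_diag):
    w = np.linalg.solve(K_dd, k_dp)               # kriging weights
    return k_pp_diag - np.einsum('ij,ij->j', k_dp, w)

# Greedy incremental design: at each step, add the candidate point that
# most reduces the worst-case prediction variance over `preds`.
def greedy_design(kernel, cands, preds, m):
    design = []
    for _ in range(m):
        best, best_val = None, np.inf
        for c in range(len(cands)):
            if c in design:
                continue
            pts = cands[design + [c]]
            v = kriging_var(kernel(pts, pts), kernel(pts, preds),
                            np.diag(kernel(preds, preds)))
            if v.max() < best_val:
                best, best_val = c, v.max()
        design.append(best)
    return cands[design]
```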

In this paper we consider the numerical solution of fractional terminal value problems (FDE-TVPs). The proposed procedure uses a Newton-type iteration that is particularly efficient when coupled with a recently introduced step-by-step procedure for solving fractional initial value problems (FDE-IVPs), which is able to produce spectrally accurate solutions of FDE problems. Some numerical tests are reported to give evidence of its effectiveness.
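The coupling of a Newton-type iteration with an FDE-IVP solver is, in essence, a shooting method. Below is a generic scalar sketch; `solve_ivp_to_T` is a hypothetical placeholder standing in for the spectrally accurate FDE-IVP procedure, and the secant update plays the role of the Newton-type step.

```python
# Generic shooting sketch for a terminal value problem (illustrative only):
# find y0 such that y(T; y0) = yT, where solve_ivp_to_T(y0) returns y(T; y0).
def shoot(solve_ivp_to_T, yT, y0_a, y0_b, tol=1e-12, max_iter=50):
    fa = solve_ivp_to_T(y0_a) - yT
    for _ in range(max_iter):
        fb = solve_ivp_to_T(y0_b) - yT
        if abs(fb) < tol:
            return y0_b
        # secant step: the Newton-type update on the terminal residual
        y0_a, y0_b, fa = y0_b, y0_b - fb * (y0_b - y0_a) / (fb - fa), fb
    return y0_b
```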

In this paper, we present an LLM-based code translation method and an associated tool called CoTran, that translates whole programs from one high-level programming language to another. Current LLM-based code translation methods lack a training approach to ensure that the translated code reliably compiles or retains substantial functional equivalence to the input code. In our work, we train an LLM via reinforcement learning, by modifying the fine-tuning process to incorporate compiler feedback and symbolic execution (symexec)-based equivalence testing feedback that checks for functional equivalence between the input and output programs. The idea is to guide an LLM-in-training, via compiler and symexec-based testing feedback, by letting it know how far it is from producing perfect translations. We report on extensive experiments comparing CoTran with 14 other code translation tools, including human-written transpilers, LLM-based translation tools, and ChatGPT, over a benchmark of more than 57,000 Java-Python equivalent pairs, and we show that CoTran outperforms them on relevant metrics such as compilation accuracy (CompAcc) and functional equivalence accuracy (FEqAcc). For example, our tool achieves 48.68% FEqAcc and 76.98% CompAcc for Python-to-Java translation, whereas the nearest competing tool (PLBART-base) achieves only 38.26% and 75.77%, respectively. Also, built upon CodeT5, CoTran achieves improvements of +11.23% and +14.89% on FEqAcc and +4.07% and +8.14% on CompAcc for Java-to-Python and Python-to-Java translation, respectively.
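As an illustration of how such feedback could be folded into an RL reward (a hypothetical sketch, not CoTran's actual reward):

```python
# Illustrative composite reward (not CoTran's implementation). `compiles`
# and `symexec_equivalent` are hypothetical stand-ins for compiler feedback
# and symexec-based equivalence testing; the weights are assumptions.
def translation_reward(src_program, translated, compiles, symexec_equivalent,
                       w_compile=0.3, w_equiv=0.7):
    r = 0.0
    if compiles(translated):
        r += w_compile                 # partial credit: it compiles
        if symexec_equivalent(src_program, translated):
            r += w_equiv               # full credit: functionally equivalent
    return r  # in [0, 1]; used as the RL signal during fine-tuning
```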

We study the complexity of the following related computational tasks concerning a fixed countable graph G: 1. Does a countable graph H provided as input have a(n induced) subgraph isomorphic to G? 2. Given a countable graph H that has a(n induced) subgraph isomorphic to G, find such a subgraph. The framework for our investigations is given by effective Wadge reducibility and by Weihrauch reducibility. Our work follows up on "Reverse mathematics and Weihrauch analysis motivated by finite complexity theory" (Computability, 2021) by BeMent, Hirst and Wallace, and we answer several of their open questions.

In this paper, we propose a non-parametric score to evaluate the quality of the solution of an iterative algorithm for Independent Component Analysis (ICA) with arbitrary Gaussian noise. The novelty of this score stems from the fact that it assumes only a finite second moment of the data and uses the characteristic function to evaluate the quality of the estimated mixing matrix without any knowledge of the parameters of the noise distribution. We also provide a new characteristic function-based contrast function for ICA and propose a fixed-point iteration to optimize the corresponding objective function. Finally, we propose a theoretical framework to obtain sufficient conditions for the local and global optima of a family of contrast functions for ICA. This framework uses quasi-orthogonalization inherently, and our results extend the classical analysis of cumulant-based objective functions to noisy ICA. We demonstrate the efficacy of our algorithms via experimental results on simulated datasets.
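For intuition, a simplified characteristic-function diagnostic (not the paper's score) checks how far the joint empirical characteristic function of the demixed components is from the product of the marginal ones; for independent components the gap vanishes:

```python
import numpy as np

# Illustrative independence diagnostic based on the empirical
# characteristic function (ECF); not the paper's mixing-matrix score.
def ecf_gap(S, ts):
    # S: (n_samples, k) demixed components; ts: (m, k) evaluation points
    joint = np.exp(1j * S @ ts.T).mean(axis=0)                  # joint ECF, (m,)
    marg = np.prod([np.exp(1j * np.outer(S[:, j], ts[:, j])).mean(axis=0)
                    for j in range(S.shape[1])], axis=0)        # product of marginals
    return np.abs(joint - marg).max()
```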

BERT, a pre-trained Transformer model, has achieved ground-breaking performance on multiple NLP tasks. In this paper, we describe BERTSUM, a simple variant of BERT, for extractive summarization. Our system is the state of the art on the CNN/Dailymail dataset, outperforming the previous best-performing system by 1.65 on ROUGE-L. The code to reproduce our results is available at //github.com/nlpyang/BertSum
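A rough sketch of the BERTSUM idea, assuming the Hugging Face transformers API; this is not the released code at the URL above:

```python
import torch
from transformers import BertModel, BertTokenizer

# Sketch: insert a [CLS] token before every sentence, run BERT once over
# the document, and score each sentence's [CLS] vector with a classifier.
tok = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
scorer = torch.nn.Linear(bert.config.hidden_size, 1)  # untrained stand-in

def sentence_scores(sentences):
    text = " ".join(f"[CLS] {s} [SEP]" for s in sentences)
    enc = tok(text, return_tensors="pt", add_special_tokens=False)
    hidden = bert(**enc).last_hidden_state[0]             # (seq_len, hidden)
    cls_pos = (enc["input_ids"][0] == tok.cls_token_id).nonzero().squeeze(1)
    return torch.sigmoid(scorer(hidden[cls_pos])).squeeze(-1)  # one score per sentence
```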

Recently, ensembles have been applied to deep metric learning to yield state-of-the-art results. Deep metric learning aims to learn deep neural networks for feature embeddings whose distances satisfy a given constraint. In deep metric learning, an ensemble takes the average of the distances learned by multiple learners. As one important aspect of an ensemble, the learners should be diverse in their feature embeddings. To this end, we propose an attention-based ensemble, which uses multiple attention masks so that each learner can attend to different parts of the object. We also propose a divergence loss, which encourages diversity among the learners. The proposed method is applied to the standard benchmarks of deep metric learning, and experimental results show that it outperforms the state-of-the-art methods by a significant margin on image retrieval tasks.
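A minimal sketch of a divergence-style loss matching the stated goal (push apart different learners' embeddings of the same input, up to a margin); the paper's exact formulation may differ:

```python
import torch

# Illustrative divergence loss: for the same batch of inputs, embeddings
# from different learners are pushed apart until they are at least
# `margin` away from each other, encouraging diversity in the ensemble.
def divergence_loss(embeddings, margin=1.0):
    # embeddings: (num_learners, batch, dim)
    L = embeddings.shape[0]
    loss = 0.0
    for i in range(L):
        for j in range(i + 1, L):
            d = (embeddings[i] - embeddings[j]).norm(dim=-1)   # (batch,)
            loss = loss + torch.clamp(margin - d, min=0).mean()
    return loss / (L * (L - 1) / 2)  # average over learner pairs
```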
