
In this short note, we discuss the circumstances that can lead to a failure to observe the design order of discretization error convergence in accuracy verification when solving a time-dependent problem. Intuitively, one would expect to observe the design spatial order of accuracy when the discretization error is measured on a series of consistently refined grids after one extremely small time step, because the time integration is then nearly exact. In reality, however, one observes discretization error convergence one order lower than the design order. This loss of accuracy is not necessarily resolved even if the time step is consistently reduced along with the grid refinement. This can cause a serious problem, because one may then wind up hunting for a coding error that does not exist. This short note clarifies the mechanism behind this failure to observe the design order of discretization error convergence in accuracy verification for time-dependent problems, and provides a guide for avoiding such pitfalls.
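In practice, the observed order in such a study is extracted from errors measured on consecutive grid levels. A minimal Python sketch of that computation (the spacings and error values below are illustrative placeholders, not results from this note):

import math

# Discretization errors measured on consistently refined grids
# (spacing halved at each level), e.g., after one very small time step.
# The numbers are placeholders chosen to exhibit second-order decay.
spacings = [0.08, 0.04, 0.02, 0.01]
errors = [3.2e-3, 8.1e-4, 2.1e-4, 5.3e-5]

# Observed order between consecutive levels:
#   p = log(e_coarse / e_fine) / log(h_coarse / h_fine)
for i in range(1, len(errors)):
    p = math.log(errors[i - 1] / errors[i]) / math.log(spacings[i - 1] / spacings[i])
    print(f"level {i}: observed order = {p:.2f}")

A design order of two with an observed order persistently near one in such a table is exactly the symptom discussed above.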

Related Content


Parametric partial differential equations (PDEs) arise throughout science and engineering, for example in uncertainty quantification, optimization, and inverse problems. However, solving these PDEs repeatedly can be prohibitively expensive, especially for large-scale complex applications. To address this issue, reduced order modeling (ROM) has emerged as an effective method for reducing computational costs. However, ROM often requires significant modifications to the existing code, which can be time-consuming and complex, particularly for large-scale legacy codes. Non-intrusive methods have gained attention as an alternative approach. However, most existing non-intrusive approaches are purely data-driven and may not respect the underlying physical laws during the online stage, resulting in less accurate approximations of the reduced solution. In this study, we propose a new non-intrusive bi-fidelity reduced basis method for time-independent parametric PDEs. Our algorithm utilizes the discrete operator, solutions, and right-hand sides obtained from the high-fidelity legacy solver. By leveraging a low-fidelity model, we efficiently construct the reduced operator and right-hand side for new parameter values during the online stage. Unlike other non-intrusive ROM methods, we enforce the reduced equation during the online stage. In addition, the non-intrusive nature of our algorithm makes it straightforward to apply to general nonlinear time-independent problems. We demonstrate its performance through several benchmark examples, including nonlinear and multiscale PDEs.
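For context, here is a minimal sketch of the Galerkin projection step that "enforcing the reduced equation" refers to, on synthetic data (the operator, right-hand side, and snapshot set are placeholders; the paper assembles the reduced quantities non-intrusively from legacy-solver outputs):

import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 5                            # full and reduced dimensions

A = np.diag(np.linspace(1.0, 10.0, n))   # stand-in discrete operator
b = rng.standard_normal(n)               # stand-in right-hand side

# Reduced basis from snapshots (random snapshots, for illustration only)
snapshots = rng.standard_normal((n, 20))
V, _, _ = np.linalg.svd(snapshots, full_matrices=False)
V = V[:, :r]                             # POD-style orthonormal basis

# Enforce the reduced equation: V^T A V x_r = V^T b
A_r = V.T @ A @ V
b_r = V.T @ b
x_r = np.linalg.solve(A_r, b_r)
x_approx = V @ x_r                       # lift reduced solution back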

In this paper, we devise a scheme for kernelizing, in sublinear space and polynomial time, various problems on planar graphs. The scheme exploits planarity to ensure that the resulting algorithms run in polynomial time and use $O((\sqrt{n} + k)\log n)$ bits of space, where $n$ is the number of vertices in the input instance and $k$ is the intended solution size. As examples, we apply the scheme to Dominating Set and Vertex Cover. For Dominating Set, we also show that a well-known kernelization algorithm due to Alber et al. (JACM 2004) can be carried out in polynomial time and space $O(k \log n)$. Along the way, we devise restricted-memory procedures for computing region decompositions and approximating the aforementioned problems, which might be of independent interest.
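To fix ideas about what a kernelization computes, here is the classical Buss kernel for Vertex Cover in Python; this is not the sublinear-space planar scheme of the paper, only a standard illustration of reducing an instance (G, k) to an equivalent instance whose size is bounded by a function of k alone:

def buss_kernel(edges, k):
    # Any vertex of degree > k must belong to every size-k vertex cover,
    # so it can be taken into the cover and removed.
    while True:
        deg = {}
        for u, v in edges:
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
        high = {v for v, d in deg.items() if d > k}
        if not high:
            break
        edges = [(u, v) for u, v in edges if u not in high and v not in high]
        k -= len(high)
        if k < 0:
            return None          # certified no-instance
    # Each remaining vertex covers at most k edges, so a yes-instance
    # has at most k^2 edges left.
    if len(edges) > k * k:
        return None              # certified no-instance
    return edges, k              # equivalent instance of size O(k^2)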

We investigate online classification with paid stochastic experts. Here, each expert must be paid before making their prediction. The amount that we pay each expert directly influences the accuracy of their prediction through some unknown Lipschitz "productivity" function. In each round, the learner must decide how much to pay each expert and then make a prediction, incurring a cost equal to a weighted sum of the prediction error and the upfront payments for all experts. We introduce an online learning algorithm whose total cost after $T$ rounds exceeds that of a predictor which knows the productivity of all experts in advance by at most $\mathcal{O}(K^2(\log T)\sqrt{T})$, where $K$ is the number of experts. In order to achieve this result, we combine Lipschitz bandits and online classification with surrogate losses. These tools allow us to improve upon the bound of order $T^{2/3}$ one would obtain in the standard Lipschitz bandit setting. Our algorithm is empirically evaluated on synthetic data.
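For intuition, the standard Lipschitz-bandit baseline mentioned above discretizes the payment range and runs UCB over the resulting arms. A minimal single-expert sketch (the productivity curve and all constants are placeholders, not taken from the paper):

import numpy as np

rng = np.random.default_rng(1)

def productivity(c):
    # Unknown Lipschitz accuracy-vs-payment curve (placeholder).
    return 0.5 + 0.4 * np.sqrt(c)

T = 5000
grid = np.linspace(0.0, 1.0, 20)   # discretized payment levels, one arm each
counts = np.zeros(len(grid))
rewards = np.zeros(len(grid))

def pull(arm):
    # Reward = prediction accuracy minus the payment made to the expert.
    correct = rng.random() < productivity(grid[arm])
    counts[arm] += 1
    rewards[arm] += float(correct) - grid[arm]

for arm in range(len(grid)):       # initialization: try each payment once
    pull(arm)
for t in range(len(grid) + 1, T + 1):
    ucb = rewards / counts + np.sqrt(2.0 * np.log(t) / counts)
    pull(int(np.argmax(ucb)))

The grid resolution drives the discretization error, which is what caps this kind of baseline at order $T^{2/3}$ and what the paper's combination with surrogate losses improves upon.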

Smart contracts play a vital role in the Ethereum ecosystem. Due to the prevalence of security issues in smart contracts, smart contract verification is urgently needed; this is the process of matching a smart contract's source code to its on-chain bytecode in order to establish mutual trust between smart contract developers and users. Although smart contract verification services are embedded in both popular Ethereum browsers (e.g., Etherscan and Blockscout) and official platforms (i.e., Sourcify) and enjoy great popularity in the ecosystem, their security and trustworthiness remain unclear. To fill this void, we present the first comprehensive security analysis of smart contract verification services in the wild. By diving into the detailed workflow of existing verifiers, we summarize the key security properties that should be met and observe eight types of vulnerabilities that can break the verification. Further, we propose a series of detection and exploitation methods to reveal the presence of vulnerabilities in the most popular services, uncovering 19 exploitable vulnerabilities in total. All of the studied smart contract verification services can be abused to help spread malicious smart contracts, and we have already observed attackers using such tricks for scams. It is hence urgent for the community to take action to detect and mitigate security issues related to smart contract verification, a key component of the Ethereum smart contract ecosystem.
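At its core, a verification service recompiles the submitted source and compares the result against the on-chain bytecode. A deliberately naive Python sketch of that comparison (helper names are ours; real verifiers must additionally handle constructor arguments, immutables, linked libraries, and exact compiler settings, and the corner cases in such handling are where vulnerabilities of the kind studied here tend to live):

def strip_metadata(bytecode_hex: str) -> str:
    # solc appends CBOR-encoded metadata to the runtime bytecode; its
    # length is stored in the final two bytes. Two compilations of the
    # same source can differ in this trailer alone.
    code = bytes.fromhex(bytecode_hex.removeprefix("0x"))
    meta_len = int.from_bytes(code[-2:], "big")
    return code[: len(code) - meta_len - 2].hex()

def naive_verify(onchain_hex: str, recompiled_hex: str) -> bool:
    return strip_metadata(onchain_hex) == strip_metadata(recompiled_hex)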

The volume function $V(t)$ of a compact set $S \subset \mathbb{R}^d$ is the Lebesgue measure of the set of points whose distance to $S$ is not larger than $t$. According to some classical results in geometric measure theory, the volume function turns out to be a polynomial, at least on a finite interval, under a quite intuitive, easy-to-interpret sufficient condition (called "positive reach") which can be seen as an extension of the notion of convexity. However, many other simple sets that do not fulfill the positive reach condition also have a polynomial volume function. To our knowledge, there is no general, simple geometric description of such sets. Still, the polynomial character of $V(t)$ has some relevant consequences, since the polynomial coefficients carry useful geometric information. In particular, the constant term is the volume of $S$ and the first order coefficient is the boundary measure (in Minkowski's sense). This paper focuses on sets whose volume function is polynomial on some interval starting at zero, whose length (which we call the "polynomial reach") might be unknown. Our main goal is to approximate this polynomial reach by statistical means, using only a large enough random sample of points inside $S$. The practical motivation is simple: when the value of the polynomial reach, or rather a lower bound for it, is approximately known, the polynomial coefficients can be estimated from the sample points using standard methods of polynomial approximation. As a result, we get a quite general method to estimate the volume and boundary measure of the set, relying only on an inner sample of points and not requiring any smoothing parameter. This paper explores the theoretical and practical aspects of this idea.
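A minimal sketch of the resulting estimation pipeline for the unit disk in $\mathbb{R}^2$, where $V(t) = \pi(1+t)^2$, so the fitted constant term should approach $\pi$ and the linear coefficient $2\pi$. Distances to $S$ are approximated by distances to a dense inner sample, and all parameters are illustrative:

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)

# Inner sample of S = unit disk in R^2.
pts = rng.standard_normal((4000, 2))
pts = pts[np.linalg.norm(pts, axis=1) <= 1.0]

# Monte Carlo estimate of V(t): fraction of box points within distance t
# of the sample, times the box volume.
box = 1.5
mc = rng.uniform(-box, box, size=(200_000, 2))
dist, _ = cKDTree(pts).query(mc)

ts = np.linspace(0.05, 0.4, 8)     # assumed to lie below the polynomial reach
V = [(dist <= t).mean() * (2 * box) ** 2 for t in ts]

c = np.polyfit(ts, V, deg=2)       # V(t) ~ c[0] t^2 + c[1] t + c[2]
print("volume   ~", c[2])          # constant term: should be close to pi
print("boundary ~", c[1])          # linear term:   should be close to 2*pi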

We propose an accurate and energy-stable parametric finite element method for solving the sharp-interface continuum model of solid-state dewetting in three-dimensional space. The model describes the motion of the film/vapor interface with contact line migration and is governed by the surface diffusion equation with proper boundary conditions at the contact line. We present a new weak formulation for the problem, in which the interface and its contact line are evolved simultaneously. By using piecewise linear elements in space and the backward Euler method in time, we then discretize the weak formulation to obtain a fully discrete parametric finite element approximation. The resulting numerical method is shown to be well-posed and unconditionally energy-stable. Furthermore, the numerical method is extended to the sharp-interface model of solid-state dewetting with anisotropic surface energies in the Riemannian metric form. Numerical results are reported to show the convergence and efficiency of the proposed numerical method, as well as the anisotropic effects on the morphological evolution of thin films in solid-state dewetting.
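Schematically, in the isotropic case and up to physical constants (the anisotropic generalization follows the Riemannian metric form used in the paper), the interface $\Gamma(t)$ with parametrization $X$ evolves by surface diffusion,

\[
\partial_t X \cdot \mathbf{n} \;=\; \Delta_{\Gamma}\,\mu, \qquad \mu \;=\; H \quad \text{on } \Gamma(t),
\]

where $H$ is the mean curvature, $\mathbf{n}$ the unit normal, and $\Delta_{\Gamma}$ the Laplace-Beltrami operator, supplemented at the contact line by a contact-angle (Young) condition and a zero-mass-flux condition.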

The purpose of this work is to study an optimal control problem for a semilinear elliptic partial differential equation with a linear combination of Dirac measures as a forcing term; the control variable corresponds to the amplitudes of these singular sources. We analyze the existence of optimal solutions and derive first order optimality conditions as well as necessary and sufficient second order optimality conditions. We develop a solution technique that discretizes the state and adjoint equations with continuous piecewise linear finite elements; the control variable is already discrete. We analyze the convergence properties of the discretization and obtain, in two dimensions, an a priori error estimate for the underlying approximation of an optimal control variable.
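A representative formulation consistent with this description (the tracking functional, Tikhonov term, and semilinear term $a(\cdot, y)$ are generic placeholders, not quoted from the paper):

\[
\min_{u \in U_{\mathrm{ad}}} \; \frac{1}{2}\|y - y_d\|_{L^2(\Omega)}^2 + \frac{\alpha}{2}\|u\|_{\mathbb{R}^m}^2
\quad \text{subject to} \quad
-\Delta y + a(\cdot, y) = \sum_{j=1}^{m} u_j\, \delta_{x_j} \ \text{in } \Omega, \qquad y = 0 \ \text{on } \partial\Omega,
\]

where the control $u = (u_1, \dots, u_m)$ collects the amplitudes of the Dirac sources at fixed points $x_j \in \Omega$; being a vector in $\mathbb{R}^m$, it is "already discrete" and requires no further discretization.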

Performance complementarity of the solvers available to tackle black-box optimization problems gives rise to the important task of algorithm selection (AS). Automated AS approaches can help replace tedious and labor-intensive manual selection, and have already shown promising performance in various optimization domains. Automated AS relies on machine learning (ML) techniques to recommend the best algorithm given information about the problem instance. Unfortunately, there are no clear guidelines for choosing the most appropriate one from the variety of ML techniques. Tree-based models such as Random Forest or XGBoost have consistently demonstrated outstanding performance for automated AS. Transformers and other tabular deep learning models have also been increasingly applied in this context. In this work, we investigate the impact of the choice of ML technique on AS performance. We compare four ML models on the task of predicting the best solver for the BBOB problems for 7 different runtime budgets in 2 dimensions. While our results confirm that per-instance AS has indeed impressive potential, we also show that the particular choice of ML technique is of far lesser importance.
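A minimal sketch of per-instance AS as supervised classification (all data below are synthetic placeholders; in the BBOB setting the inputs would be, e.g., exploratory landscape features, and the labels the best-performing solver per instance):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Rows: problem instances described by numerical features.
# Labels: index of the best solver for each instance.
X = rng.standard_normal((500, 16))
y = rng.integers(0, 4, size=500)   # 4 candidate solvers

model = RandomForestClassifier(n_estimators=300, random_state=0)
print("selector accuracy:", cross_val_score(model, X, y, cv=5).mean())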

The generalization mystery in deep learning is the following: why do over-parameterized neural networks trained with gradient descent (GD) generalize well on real datasets even though they are capable of fitting random datasets of comparable size? Furthermore, from among all solutions that fit the training data, how does GD find one that generalizes well (when such a well-generalizing solution exists)? We argue that the answer to both questions lies in the interaction of the gradients of different examples during training. Intuitively, if the per-example gradients are well-aligned, that is, if they are coherent, then one may expect GD to be (algorithmically) stable, and hence to generalize well. We formalize this argument with an easy-to-compute and interpretable metric for coherence, and show that the metric takes on very different values on real and random datasets for several common vision networks. The theory also explains a number of other phenomena in deep learning, such as why some examples are reliably learned earlier than others, why early stopping works, and why it is possible to learn from noisy labels. Moreover, since the theory provides a causal explanation of how GD finds a well-generalizing solution when one exists, it motivates a class of simple modifications to GD that attenuate memorization and improve generalization. Generalization in deep learning is an extremely broad phenomenon, and therefore it requires an equally general explanation. We conclude with a survey of alternative lines of attack on this problem, and argue on this basis that the proposed approach is the most viable one.
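One simple stand-in for such a coherence metric (the paper defines its own; this generic ratio only conveys the idea) compares the squared norm of the mean per-example gradient with the mean squared gradient norm:

import numpy as np

def coherence(per_example_grads):
    # ||mean_i g_i||^2 / mean_i ||g_i||^2: equals 1 when all per-example
    # gradients coincide and about 1/m when they are mutually orthogonal,
    # as in pure memorization of m random examples.
    g_mean = per_example_grads.mean(axis=0)
    return float(np.dot(g_mean, g_mean) /
                 np.mean(np.sum(per_example_grads ** 2, axis=1)))

rng = np.random.default_rng(4)
shared = rng.standard_normal(1000)
aligned = shared + 0.1 * rng.standard_normal((64, 1000))  # real-data regime
random_ = rng.standard_normal((64, 1000))                 # memorization regime
print(coherence(aligned), coherence(random_))             # ~1 vs ~1/64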

In recent years, the number of complex documents and texts has grown exponentially, and accurately classifying them in many applications requires a deeper understanding of machine learning methods. Many machine learning approaches have achieved impressive results in natural language processing. The success of these learning algorithms relies on their capacity to capture complex models and non-linear relationships within data. However, finding suitable structures, architectures, and techniques for text classification remains a challenge for researchers. In this paper, we give a brief overview of text classification algorithms. The overview covers text feature extraction, dimensionality reduction methods, existing algorithms and techniques, and evaluation methods. Finally, we discuss the limitations of each technique and its applications to real-world problems.
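As a minimal illustration of the pipeline covered by this overview, the sketch below chains feature extraction, dimensionality reduction, classification, and evaluation with scikit-learn; the corpus and labels are toy placeholders:

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

docs = ["cheap flights to rome", "train schedule update",
        "goal scored in extra time", "league standings table"] * 25
labels = [0, 0, 1, 1] * 25             # 0 = travel, 1 = sports

pipeline = Pipeline([
    ("features", TfidfVectorizer()),          # text feature extraction
    ("reduce", TruncatedSVD(n_components=5)), # dimensionality reduction
    ("clf", LogisticRegression()),            # classification algorithm
])
print(cross_val_score(pipeline, docs, labels, cv=5).mean())  # evaluation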
