
We introduce a novel particle-in-Fourier (PIF) scheme that extends the method's applicability to non-periodic boundary conditions. Our method handles free-space boundary conditions by replacing the Fourier Laplacian operator in PIF with a mollified Green's function, as first introduced by Vico, Greengard, and Ferrando. This modification yields highly accurate free-space solutions to the Vlasov-Poisson system while still maintaining energy conservation up to an error bounded by the time step size. We also explain how to extend our scheme to arbitrary Dirichlet boundary conditions via standard potential theory, which we illustrate in detail for a circular boundary. We support our approach with proof-of-concept numerical results from two-dimensional plasma test cases that demonstrate the accuracy, efficiency, and conservation properties of the scheme.
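To illustrate the core idea, here is a minimal sketch (not the authors' implementation) of a 2D free-space Poisson solve that replaces the $1/|k|^2$ Fourier symbol with the truncated Green's function of Vico, Greengard, and Ferrando; the function name, grid conventions, and 4x zero-padding factor are our assumptions.

```python
import numpy as np
from scipy.special import j0, j1  # Bessel functions of the first kind

def free_space_poisson_2d(rho, h, L_trunc):
    """Solve -Laplace(phi) = rho in free space via FFT with the
    truncated 2D Green's function (Vico-Greengard-Ferrando, sketch).

    rho:     charge density on an N x N grid with spacing h.
    L_trunc: truncation radius; should exceed the maximum source-target
             distance so the aperiodic convolution is captured exactly.
    """
    N = rho.shape[0]
    M = 4 * N  # zero-pad (oversample) to avoid periodic aliasing
    kx = 2 * np.pi * np.fft.fftfreq(M, d=h)
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    k = np.hypot(KX, KY)

    # Fourier symbol of -log(r)/(2*pi) truncated at radius L:
    #   G_L(k) = (1 - J0(L k)) / k^2 - L log(L) J1(L k) / k,
    # which is finite at k = 0, unlike the free symbol 1/k^2.
    with np.errstate(divide="ignore", invalid="ignore"):
        GL = (1.0 - j0(L_trunc * k)) / k**2 \
             - L_trunc * np.log(L_trunc) * j1(L_trunc * k) / k
    GL.flat[0] = L_trunc**2 * (1.0 - 2.0 * np.log(L_trunc)) / 4.0  # k -> 0 limit

    rho_pad = np.zeros((M, M))
    rho_pad[:N, :N] = rho
    phi_pad = np.fft.ifft2(GL * np.fft.fft2(rho_pad)).real
    return phi_pad[:N, :N]
```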

Related Content

CASES: International Conference on Compilers, Architectures, and Synthesis for Embedded Systems. Publisher: ACM.

Central Bank Digital Currency (CBDC) is a novel form of money that could be issued and regulated by central banks, offering benefits such as programmability, security, and privacy. However, the design of a CBDC system presents numerous technical and social challenges. This paper presents the design and prototype of a non-custodial wallet, a device that enables users to store and spend CBDC in various contexts. To address the challenges of designing a CBDC system, we conducted a series of workshops with internal and external stakeholders, using methods such as storytelling, metaphors, and provotypes to communicate CBDC concepts, elicit user feedback and critique, and incorporate normative values into the technical design. We derived basic guidelines for designing CBDC systems that balance technical and social aspects, and reflect user needs and values. Our paper contributes to the CBDC discourse by demonstrating a practical example of how CBDC could be used in everyday life and by highlighting the importance of a user-centred approach.

While analogies are a common way to evaluate word embeddings in NLP, it is also of interest to investigate whether analogical reasoning is itself a task that can be learned. In this paper, we test several ways to learn basic analogical reasoning, focusing specifically on analogies that are more typical of those used to evaluate analogical reasoning in humans than those in commonly used NLP benchmarks. Our experiments show that models are able to learn analogical reasoning, even from a small amount of data. We additionally evaluate our models on a dataset with a human baseline and find that, after training, models approach human performance.
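For context, the sketch below shows the classic vector-offset rule commonly used to evaluate analogies on word embeddings; it is the standard baseline, not the learning models studied in the paper, and the `embeddings` dictionary is a hypothetical stand-in for any word-to-vector mapping.

```python
import numpy as np

def solve_analogy(a, b, c, embeddings):
    """Return the word d best completing 'a : b :: c : d' under the
    vector-offset rule: maximize cosine similarity to b - a + c."""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    target /= np.linalg.norm(target)
    best_word, best_sim = None, -np.inf
    for word, vec in embeddings.items():
        if word in (a, b, c):  # exclude the query words, as is standard
            continue
        sim = vec @ target / np.linalg.norm(vec)
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word
```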

We consider the problem of finite-time identification of linear dynamical systems from $T$ samples of a single trajectory. Recent results have predominantly focused on the setup where no structural assumption is made on the system matrix $A^* \in \mathbb{R}^{n \times n}$, and have consequently analyzed the ordinary least squares (OLS) estimator in detail. In contrast, we assume that prior structural information on $A^*$ is available, which can be captured in the form of a convex set $\mathcal{K}$ containing $A^*$. For the solution of the ensuing constrained least squares estimator, we derive non-asymptotic error bounds in the Frobenius norm that depend on the local size of $\mathcal{K}$ at $A^*$. To illustrate the usefulness of these results, we instantiate them for four examples, namely when (i) $A^*$ is sparse and $\mathcal{K}$ is a suitably scaled $\ell_1$ ball; (ii) $\mathcal{K}$ is a subspace; (iii) $\mathcal{K}$ consists of matrices each of which is formed by sampling a bivariate convex function on a uniform $n \times n$ grid (convex regression); and (iv) $\mathcal{K}$ consists of matrices each row of which is formed by uniform sampling (with step size $1/T$) of a univariate Lipschitz function. In all these situations, we show that $A^*$ can be reliably estimated for values of $T$ much smaller than what is needed in the unconstrained setting.
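To make case (i) concrete, here is a hedged sketch of the constrained least squares estimator over a scaled $\ell_1$ ball, computed by projected gradient descent with the standard $\ell_1$ projection of Duchi et al. (2008); the solver choice and all names are illustrative, since the paper analyzes the estimator rather than prescribing an algorithm.

```python
import numpy as np

def project_l1(v, radius):
    """Euclidean projection of v onto the l1 ball of given radius
    (Duchi et al., 2008)."""
    if np.abs(v).sum() <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def constrained_ls(X, Y, radius, n_iters=500):
    """Sketch of the constrained LS estimator for x_{t+1} = A* x_t + noise,
    with K a scaled l1 ball (the sparse case).  The trajectory is stacked
    as columns: Y = [x_1 .. x_T], X = [x_0 .. x_{T-1}]."""
    n = X.shape[0]
    A = np.zeros((n, n))
    step = 1.0 / np.linalg.norm(X @ X.T, 2)  # 1/L for the quadratic loss
    for _ in range(n_iters):
        grad = (A @ X - Y) @ X.T             # gradient of 0.5*||AX - Y||_F^2
        A = project_l1((A - step * grad).ravel(), radius).reshape(n, n)
    return A
```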

Superconvergence of differential structure on discretized surfaces is studied in this paper. The newly introduced geometric supercloseness provides us with a fundamental tool to prove the superconvergence of gradient recovery on deviated surfaces. An algorithmic framework for gradient recovery without exact geometric information is introduced. Several numerical examples are documented to validate the theoretical results.

Efficiently enumerating all the extreme points of a polytope defined by a system of linear inequalities is a well-known challenging problem. We consider a special case and present an algorithm that enumerates all the extreme points of a bisubmodular polyhedron in $\mathcal{O}(n^4|V|)$ time and $\mathcal{O}(n^2)$ space, where $n$ is the dimension of the underlying space and $V$ is the set of outputs. We use reverse search and the signed poset linked to extreme points to avoid redundant search. Our algorithm generalizes the enumeration of all the extreme points of a base polyhedron, which encompasses several combinatorial enumeration problems.
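For intuition, the sketch below brute-forces extreme-point candidates using the signed greedy rule associated with signed orderings (following Fujishige's bisubmodular theory); avoiding exactly this redundant enumeration is what the paper's reverse search contributes. The toy function, a concave function of $|X| + |Y|$, is assumed bisubmodular for illustration.

```python
import itertools
import math

def signed_greedy(f, order, signs):
    """One extreme point of the bisubmodular polyhedron
    P(f) = {x : x(X) - x(Y) <= f(X, Y) for all disjoint X, Y},
    via the signed greedy rule: process elements in 'order', adding
    each to X (sign +1) or Y (sign -1)."""
    X, Y = set(), set()
    x, prev = {}, f(X, Y)
    for e, s in zip(order, signs):
        (X if s > 0 else Y).add(e)
        cur = f(X, Y)
        x[e] = s * (cur - prev)
        prev = cur
    return x

# Toy function (assumed bisubmodular): a concave function of |X| + |Y|.
f = lambda X, Y: math.sqrt(len(X) + len(Y))

V = [0, 1, 2]
points = {tuple(signed_greedy(f, order, signs)[e] for e in V)
          for order in itertools.permutations(V)
          for signs in itertools.product([1, -1], repeat=len(V))}
print(f"{len(points)} distinct extreme-point candidates")
```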

This article proposes a new two-parameter generalized entropy, which reduces to the Tsallis and the Shannon entropy for specific values of its parameters. We develop a number of information-theoretic properties of this generalized entropy and the associated divergence, for instance the sub-additivity property, the strong sub-additivity property, joint convexity, and information monotonicity. The article presents an explicit investigation of the information-theoretic and information-geometric characteristics of the new generalized entropy and compares them with the properties of the Tsallis and the Shannon entropy.
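The paper's two-parameter entropy is not reproduced here, but the one-parameter reduction it generalizes can be checked numerically: the sketch below shows the Tsallis entropy approaching the Shannon entropy as $q \to 1$.

```python
import numpy as np

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1), which
    recovers the Shannon entropy in the limit q -> 1."""
    if np.isclose(q, 1.0):
        return shannon_entropy(p)
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

p = np.array([0.5, 0.3, 0.2])
for q in (2.0, 1.5, 1.1, 1.01, 1.001):
    print(f"q = {q}: S_q = {tsallis_entropy(p, q):.6f}")
print(f"Shannon: {shannon_entropy(p):.6f}")
```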

In this work, we introduce a novel approach to regularization in multivariable regression problems. Our regularizer, called DLoss, penalises differences between the model's derivatives and the derivatives of the data-generating function as estimated from the training data. We call these estimated derivatives data derivatives. The goal of our method is to align the model with the data, not only in terms of target values but also in terms of the derivatives involved. To estimate data derivatives, we select 2-tuples of input-value pairs from the training data, using either nearest-neighbour or random selection. On synthetic and real datasets, we evaluate the effectiveness of adding DLoss, with different weights, to the standard mean squared error loss. The experimental results show that with DLoss (using nearest-neighbour selection) we obtain, on average, the best rank with respect to MSE on validation datasets, compared to no regularization, L2 regularization, and Dropout.
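A minimal sketch of the idea, assuming a linear model (whose gradient is simply $w$) and a directional finite-difference estimate of the data derivatives from nearest-neighbour 2-tuples; the exact form of DLoss and the training details in the paper may differ.

```python
import numpy as np

def data_derivatives(X, y):
    """Estimate directional derivatives of the data-generating function
    from nearest-neighbour 2-tuples of (input, value) pairs (sketch)."""
    pairs = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf
        j = int(np.argmin(d))          # nearest-neighbour selection
        u = (X[j] - X[i]) / d[j]       # unit direction between the pair
        dy = (y[j] - y[i]) / d[j]      # finite-difference slope along u
        pairs.append((u, dy))
    return pairs

def fit_linear_with_dloss(X, y, lam=0.1, lr=1e-2, n_iters=2000):
    """Train f(x) = w.x + b on MSE + lam * DLoss; for a linear model the
    directional derivative along u is simply w.u."""
    n, p = X.shape
    w, b = np.zeros(p), 0.0
    pairs = data_derivatives(X, y)
    for _ in range(n_iters):
        resid = X @ w + b - y
        gw, gb = 2 * X.T @ resid / n, 2 * resid.mean()   # MSE gradient
        for u, dy in pairs:                              # DLoss gradient
            gw += lam * 2 * (w @ u - dy) * u / len(pairs)
        w -= lr * gw
        b -= lr * gb
    return w, b
```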

Exploring the semantic context in scene images is essential for indoor scene recognition. However, due to the diverse intra-class spatial layouts and the coexisting inter-class objects, modeling contextual relationships that adapt to diverse image characteristics is a great challenge. Existing contextual modeling methods for scene recognition exhibit two limitations: 1) They typically model only one kind of spatial relationship among objects within scenes, in an artificially predefined manner, with limited exploration of diverse spatial layouts. 2) They often overlook the differences in coexisting objects across different scenes, which suppresses scene recognition performance. To overcome these limitations, we propose SpaCoNet, which simultaneously models the Spatial relation and Co-occurrence of objects guided by semantic segmentation. Firstly, the Semantic Spatial Relation Module (SSRM) is constructed to model scene spatial features. With the help of semantic segmentation, this module decouples the spatial information from the scene image and thoroughly explores all spatial relationships among objects in an end-to-end manner. Secondly, both spatial features from the SSRM and deep features from the Image Feature Extraction Module are allocated to each object, so as to distinguish coexisting objects across different scenes. Finally, utilizing the discriminative features above, we design a Global-Local Dependency Module to explore the long-range co-occurrence among objects, and further generate a semantic-guided feature representation for indoor scene recognition. Experimental results on three widely used scene datasets demonstrate the effectiveness and generality of the proposed method.
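As a hedged illustration of "allocating features to each object", the sketch below average-pools a feature map over each semantic-segmentation class, yielding one descriptor per object category present in the scene; this is not the SpaCoNet architecture itself, and all names are ours.

```python
import numpy as np

def allocate_features_per_object(features, seg, n_classes):
    """Average a feature map over each semantic class (sketch of
    segmentation-guided feature allocation).

    features: (C, H, W) feature map from a backbone network.
    seg:      (H, W) integer class labels from semantic segmentation.
    Returns an (n_classes, C) array; rows for absent classes stay zero.
    """
    C = features.shape[0]
    out = np.zeros((n_classes, C))
    for c in range(n_classes):
        mask = seg == c
        if mask.any():
            out[c] = features[:, mask].mean(axis=1)
    return out
```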

The paper explores the Biased Random-Key Genetic Algorithm (BRKGA) in the domain of logistics and vehicle routing. Specifically, the application of the algorithm is contextualized within the framework of the Vehicle Routing Problem with Occasional Drivers and Time Windows (VRPODTW), which represents a critical challenge in contemporary delivery systems. Within this context, BRKGA emerges as an innovative solution approach to optimize routing plans, balancing cost-efficiency with operational constraints. This research introduces a new BRKGA, named BRKGA-VM, characterized by a mutant population whose size can vary from generation to generation. This novel variant was tested on the VRPODTW. For this purpose, an innovative problem-specific decoder procedure was proposed and implemented. Furthermore, a hybridization of the algorithm with a Variable Neighborhood Descent (VND) algorithm has also been considered, showing an improvement in problem-solving capabilities. Computational results show better performance in terms of effectiveness compared to a previous version of BRKGA, denoted MP. The improved performance of BRKGA-VM is evident from its ability to optimize solutions across a wide range of scenarios, with significant improvements observed for each type of instance considered. The analysis also reveals that BRKGA-VM achieves preset goals more quickly than MP, thanks to the increased variability induced in the mutant population, which facilitates the exploration of new regions of the solution space. Furthermore, the integration of VND has shown an additional positive impact on the quality of the solutions found.
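To make the variable-mutant idea concrete, here is a minimal BRKGA skeleton in which the mutant fraction changes every generation; the sampling range for that fraction, the decoder interface, and all parameter values are our assumptions, not the paper's configuration. A routing decoder would typically sort the random keys to obtain a customer visiting order and return the route cost.

```python
import numpy as np

rng = np.random.default_rng(0)

def brkga_vm(decode, n_genes, pop_size=100, elite_frac=0.2,
             rho=0.7, n_gens=200):
    """Minimal BRKGA (minimization) with a variable mutant population.
    decode: callable mapping a random-key vector in [0,1]^n to a cost."""
    pop = rng.random((pop_size, n_genes))
    for _ in range(n_gens):
        fitness = np.array([decode(ind) for ind in pop])
        pop = pop[np.argsort(fitness)]            # best individuals first
        n_elite = int(elite_frac * pop_size)
        mutant_frac = rng.uniform(0.1, 0.3)       # assumed variable range
        n_mutant = int(mutant_frac * pop_size)
        n_cross = pop_size - n_elite - n_mutant
        elites = pop[:n_elite]
        mutants = rng.random((n_mutant, n_genes))  # fresh random keys
        offspring = np.empty((n_cross, n_genes))
        for k in range(n_cross):
            e = elites[rng.integers(n_elite)]              # elite parent
            o = pop[rng.integers(n_elite, pop_size)]       # non-elite parent
            mask = rng.random(n_genes) < rho               # biased crossover
            offspring[k] = np.where(mask, e, o)
        pop = np.vstack([elites, offspring, mutants])
    fitness = np.array([decode(ind) for ind in pop])
    return pop[np.argmin(fitness)], fitness.min()
```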

The state transition algorithm (STA) is a metaheuristic method for global optimization. Recently, a modified STA named the parameter-optimal state transition algorithm (POSTA) was proposed. In POSTA, the performance of the expansion, rotation, and axesion operators is optimized through a parameter selection mechanism. However, owing to insufficient use of historical information, POSTA still suffers from slow convergence and low solution accuracy on certain problems. To make better use of historical information, Nelder-Mead (NM) simplex search and quadratic interpolation (QI) are integrated into POSTA. The enhanced POSTA is tested on 14 benchmark functions in 20-, 30-, and 50-dimensional spaces. An experimental comparison with several competitive metaheuristic methods demonstrates the effectiveness of the proposed method.
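For reference, one-dimensional quadratic interpolation jumps to the vertex of the parabola through three historical samples; the sketch below is a generic version of that step, not the specific way QI is integrated into POSTA.

```python
def quadratic_interpolation_min(x1, f1, x2, f2, x3, f3):
    """Vertex of the parabola through (x1,f1), (x2,f2), (x3,f3): the
    standard quadratic-interpolation step for exploiting historical
    candidate solutions along one coordinate.  Note the vertex is a
    maximum if the fitted curvature is negative, so a method like POSTA
    would evaluate the new point before accepting it."""
    num = ((x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2
           + (x1**2 - x2**2) * f3)
    den = 2.0 * ((x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3)
    if den == 0.0:  # collinear samples: fall back to the best point seen
        return min((x1, f1), (x2, f2), (x3, f3), key=lambda t: t[1])[0]
    return num / den
```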
