
In this paper, we consider point sets of finite Desarguesian planes whose multiset of intersection numbers with lines is the same for all but one exceptional parallel class of lines. We call such sets regular of affine type. When the lines of the exceptional parallel class all have the same intersection number, we call these sets regular of pointed type. Classical examples include unitals; a detailed study and constructions of such sets with few intersection numbers are due to Hirschfeld and Sz\H{o}nyi (1991). Here we provide some general construction methods for regular sets and describe a few infinite families. The members of one of these families have the size of a unital and meet the affine lines of $\mathrm{PG}(2, q^2)$ in one of $4$ possible intersection numbers, each of them congruent to $1$ modulo $\sqrt{q}$. As a byproduct, we determine the intersection sizes of the Hermitian curve defined over $\mathrm{GF}(q^2)$ with suitable rational curves of degree $\sqrt{q}$, and we obtain $\sqrt{q}$-divisible codes with $5$ non-zero weights. We also determine the weight enumerator, modulo some powers of $q$, of the codes arising from the general constructions.

Related content

In this paper, we propose an alternating optimization method for a time-optimal trajectory generation problem. Unlike existing solutions, our approach introduces a new formulation that minimizes the overall trajectory running time while maintaining the polynomial smoothness constraints and incorporating hard limits on motion derivatives to ensure feasibility. To solve this problem, an alternating peak-optimization method is developed, which splits the optimization process into two sub-problems: the first optimizes the polynomial coefficients for smoothness, and the second adjusts the time allocated to each trajectory segment. These two sub-optimizations are alternated until a feasible minimum-time solution is found. We offer a comprehensive set of simulations and experiments to showcase the superior performance of our approach in comparison to existing methods. A collection of demonstration videos with real drone flying experiments can be accessed at //www.youtube.com/playlist?list=PLQGtPFK17zUYkwFT-fr0a8E49R8Uq712l .
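The alternation described above can be illustrated with a toy one-dimensional version. The sketch below is only a minimal illustration of the two sub-problems, assuming cubic segments with zero end velocities and a simple per-segment time-stretching rule; the actual formulation, constraints, and solver of the paper are not reproduced.

```python
import numpy as np

# Toy sketch of the alternation: (1) fit smooth per-segment polynomials for
# the current time allocation, (2) stretch the times of segments that violate
# derivative limits, and repeat.  Waypoints, limits, and the cubic
# zero-end-velocity parametrisation are illustrative assumptions only.

def fit_segment(p0, p1, T):
    """Cubic with zero end velocities: p(0)=p0, p(T)=p1, p'(0)=p'(T)=0."""
    a2 = 3.0 * (p1 - p0) / T**2
    a3 = -2.0 * (p1 - p0) / T**3
    return np.array([p0, 0.0, a2, a3])

def peak_derivatives(c, T, n=200):
    t = np.linspace(0.0, T, n)
    v = c[1] + 2 * c[2] * t + 3 * c[3] * t**2
    a = 2 * c[2] + 6 * c[3] * t
    return np.max(np.abs(v)), np.max(np.abs(a))

def alternating_time_allocation(waypoints, times, v_max, a_max, iters=50):
    times = np.array(times, dtype=float)
    for _ in range(iters):
        coeffs = [fit_segment(waypoints[i], waypoints[i + 1], T)
                  for i, T in enumerate(times)]          # sub-problem 1
        feasible = True
        for i, (c, T) in enumerate(zip(coeffs, times)):  # sub-problem 2
            v_pk, a_pk = peak_derivatives(c, T)
            # stretching time by s reduces velocity by 1/s and acceleration by 1/s^2
            scale = max(v_pk / v_max, np.sqrt(a_pk / a_max), 1.0)
            if scale > 1.0 + 1e-6:
                times[i] *= scale
                feasible = False
        if feasible:
            break
    return coeffs, times

coeffs, times = alternating_time_allocation([0.0, 1.0, 3.0], [1.0, 1.0],
                                            v_max=1.5, a_max=3.0)
print("segment times:", times)
```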

Transformers require a fixed number of layers and heads, which makes them inflexible with respect to the complexity of individual samples and expensive in training and inference. To address this, we propose a sample-based Dynamic Hierarchical Transformer (DHT) model whose layers and heads can be dynamically configured for single data samples by solving contextual bandit problems. To determine the number of layers and heads, we use the Upper Confidence Bound algorithm, while we deploy combinatorial Thompson Sampling to select specific head combinations given their number. Different from previous work that focuses on compressing trained networks for inference only, DHT is not only advantageous for adaptively optimizing the underlying network architecture during training but also provides a flexible network for efficient inference. To the best of our knowledge, this is the first comprehensive data-driven dynamic transformer that does not rely on any additional auxiliary neural networks to implement the dynamic system. According to the experimental results, we achieve up to 74% computational savings for both training and inference with a minimal loss of accuracy.
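As a rough illustration of the bandit machinery mentioned above, the sketch below pairs an Upper-Confidence-Bound rule for choosing how many heads to keep with Beta-Bernoulli Thompson sampling for choosing which heads to keep. The reward signal, the arm layout, and all hyperparameters are stand-ins for illustration only and do not reproduce the paper's contextual formulation.

```python
import numpy as np

class DepthUCB:
    """UCB over a small set of arms, e.g. the candidate number of heads."""
    def __init__(self, n_arms):
        self.counts = np.zeros(n_arms)
        self.values = np.zeros(n_arms)
        self.t = 0

    def select(self):
        self.t += 1
        ucb = self.values + np.sqrt(2 * np.log(self.t) / np.maximum(self.counts, 1e-9))
        ucb[self.counts == 0] = np.inf            # try every arm once first
        return int(np.argmax(ucb))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

class HeadThompson:
    """Beta-Bernoulli Thompson sampling over individual heads."""
    def __init__(self, n_heads):
        self.alpha = np.ones(n_heads)
        self.beta = np.ones(n_heads)

    def select(self, k):
        samples = np.random.beta(self.alpha, self.beta)
        return np.argsort(samples)[-k:]           # k most promising heads

    def update(self, heads, reward):              # reward in {0, 1}
        self.alpha[heads] += reward
        self.beta[heads] += 1 - reward

depth_bandit = DepthUCB(n_arms=4)                 # choose between 1 and 4 heads
head_bandit = HeadThompson(n_heads=8)
for step in range(100):
    k = depth_bandit.select() + 1
    heads = head_bandit.select(k)
    reward = float(np.random.rand() < 0.5 + 0.05 * k)   # stand-in reward signal
    depth_bandit.update(k - 1, reward)
    head_bandit.update(heads, reward)
```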

In this article we discuss the theory of geodesics in information geometry and an application in astrophysics. We study how gradient flows in information geometry describe geodesics, explore the related mechanics by introducing a constraint, and apply our theory to the Gaussian model and black hole thermodynamics. In particular, we demonstrate how deformation of gradient flows leads to more general Randers-Finsler metrics, describe the Hamiltonian mechanics that derives from a constraint, and prove duality via canonical transformation. We also verify our theory for a deformation of the Gaussian model and describe the dynamical evolution of flat metrics for Kerr and Reissner-Nordstr\"om black holes.

Implementation of many statistical methods for large, multivariate data sets requires one to solve a linear system whose dimension, depending on the method, is the number of observations or the length of an individual data vector. This is often the limiting factor in scaling the method with data size and complexity. In this paper we illustrate the use of Krylov subspace methods to address this issue in a statistical solution to a source separation problem in cosmology, where the data size is prohibitively large for direct solution of the required system. Two distinct approaches, adapted from techniques in the literature, are described: one that applies the method of conjugate gradients directly to the Kronecker-structured problem and another that reformulates the system as a Sylvester matrix equation. We show that both approaches produce an accurate solution within an acceptable computation time and with practical memory requirements for currently available data sizes.
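For the first approach, the key point is that conjugate gradients only needs matrix-vector products, and a Kronecker-structured product $(A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(B X A^{\mathsf T})$ can be formed without ever assembling $A \otimes B$. The sketch below shows this with SciPy on small random symmetric positive definite stand-ins; the actual cosmological covariance structure of the paper is not reproduced.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Conjugate gradients on the Kronecker-structured SPD system (A kron B) x = b,
# using vec(B X A^T) = (A kron B) vec(X) with column-major vec, so the
# Kronecker product is never formed explicitly.  A and B are random stand-ins.

rng = np.random.default_rng(0)
m, n = 30, 40
A = rng.standard_normal((m, m))
A = A @ A.T + m * np.eye(m)                    # make A symmetric positive definite
B = rng.standard_normal((n, n))
B = B @ B.T + n * np.eye(n)                    # make B symmetric positive definite

def kron_matvec(x):
    X = x.reshape((n, m), order="F")           # un-vec (column-major)
    return (B @ X @ A.T).ravel(order="F")      # vec(B X A^T)

op = LinearOperator((m * n, m * n), matvec=kron_matvec)
b = rng.standard_normal(m * n)
x, info = cg(op, b)

# Sanity check against the explicitly assembled system (feasible only for small m, n).
print("converged:", info == 0,
      " residual:", np.linalg.norm(np.kron(A, B) @ x - b))
```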

In 2020, Behr defined the problem of edge coloring of signed graphs and showed that every signed graph $(G, \sigma)$ can be colored using exactly $\Delta(G)$ or $\Delta(G) + 1$ colors, where $\Delta(G)$ is the maximum degree in the graph $G$. In this paper, we focus on products of signed graphs. We recall the definitions of the Cartesian, tensor, strong, and corona products of signed graphs and prove edge-coloring results for them. In particular, we show that $(1)$ the Cartesian product of $\Delta$-edge-colorable signed graphs is $\Delta$-edge-colorable, $(2)$ the tensor product of a $\Delta$-edge-colorable signed graph and a signed tree requires only $\Delta$ colors, and $(3)$ the corona product of almost any two signed graphs is $\Delta$-edge-colorable. We also prove some results related to the coloring of products of signed paths and cycles.

It is well known that stiff systems and differential equations with highly oscillatory solutions cannot be solved efficiently using conventional methods. In this paper, we study two new classes of exponential Runge-Kutta (ERK) integrators for efficiently solving stiff systems or highly oscillatory problems. We first present a novel class of explicit modified-version exponential Runge-Kutta (MVERK) methods based on the order conditions. Furthermore, we consider a class of explicit simplified-version exponential Runge-Kutta (SVERK) methods. Numerical results demonstrate the high efficiency of the explicit MVERK integrators and SVERK methods derived in this paper compared with the well-known explicit ERK integrators for stiff systems or highly oscillatory problems in the literature.
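To fix ideas, the sketch below implements the classical first-order exponential Euler scheme for a semilinear stiff system $u' = Au + g(u)$, in which the stiff linear part is propagated exactly through the matrix exponential while the nonlinearity is treated explicitly. It only illustrates the general ERK idea and is not one of the MVERK or SVERK methods constructed in the paper; the test problem is an arbitrary stand-in.

```python
import numpy as np
from scipy.linalg import expm, solve

def exponential_euler(A, g, u0, h, n_steps):
    """Integrate u' = A u + g(u) with u_{k+1} = e^{hA} u_k + h * phi1(hA) g(u_k)."""
    d = len(u0)
    E = expm(h * A)                            # e^{hA}, exact propagator of the linear part
    phi1 = solve(h * A, E - np.eye(d))         # phi1(hA) = (hA)^{-1} (e^{hA} - I)
    u = np.array(u0, dtype=float)
    traj = [u.copy()]
    for _ in range(n_steps):
        u = E @ u + h * phi1 @ g(u)
        traj.append(u.copy())
    return np.array(traj)

# Stiff stand-in problem: fast linear decay plus a mild nonlinearity.
A = np.array([[-1000.0, 0.0], [0.0, -1.0]])
g = lambda u: np.array([np.sin(u[1]), u[0] ** 2])
traj = exponential_euler(A, g, u0=[1.0, 1.0], h=0.01, n_steps=100)
print(traj[-1])
```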

In this paper, two novel classes of implicit exponential Runge-Kutta (ERK) methods are studied for solving highly oscillatory systems. First, we analyze the symplectic conditions of two kinds of exponential integrators and present a first-order symplectic method. To solve highly oscillatory problems, highly accurate implicit ERK integrators (up to order four) are then formulated by comparing the Taylor expansions of the numerical and exact solutions; it is shown that the order conditions of the two new kinds of exponential methods are identical to those of classical Runge-Kutta (RK) methods. Moreover, we investigate the linear stability properties of these exponential methods. Finally, numerical results not only show the long-time energy preservation of the first-order symplectic method, but also illustrate the accuracy and efficiency of the formulated methods in comparison with standard ERK methods.

Reduced-order models have been widely adopted in fluid mechanics, particularly in the context of Newtonian fluid flows. These models offer the ability to predict complex dynamics, such as instabilities and oscillations, at a considerably reduced computational cost. In contrast, the reduced-order modeling of non-Newtonian viscoelastic fluid flows remains relatively unexplored. This work leverages the sparse identification of nonlinear dynamics (SINDy) algorithm to develop interpretable reduced-order models for viscoelastic flows. In particular, we explore a benchmark oscillatory viscoelastic flow in the four-roll mill geometry using the classical Oldroyd-B fluid. This flow exemplifies many canonical challenges associated with non-Newtonian flows, including transitions, asymmetries, instabilities, and bifurcations arising from the interplay of viscous and elastic forces, all of which require expensive computations to resolve the fast timescales and long transients characteristic of such flows. First, we demonstrate the effectiveness of our data-driven surrogate model in predicting the transient evolution and accurately reconstructing the spatial flow field for fixed flow parameters. We then develop a fully parametric, nonlinear model capable of capturing the dynamic variations as a function of the Weissenberg number (Wi). While the training data are predominantly concentrated on a limit-cycle regime at moderate Wi, we show that the parameterized model can be used to extrapolate, accurately predicting the dominant dynamics at high Weissenberg numbers. The proposed methodology represents an initial step toward reduced-order modeling of viscoelastic flows, with the potential to be further refined and enhanced for the design, optimization, and control of a wide range of non-Newtonian fluid flows using machine learning and reduced-order modeling techniques.
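The SINDy workflow referred to above (choose a candidate library, run sparse regression on time-series data, then simulate the identified model) can be sketched with the pysindy package. In the sketch below, a Van der Pol oscillator stands in for the reduced coordinates of the Oldroyd-B four-roll mill flow, purely to keep the example self-contained; the library, threshold, and data are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
import pysindy as ps

# Synthetic stand-in for the reduced coordinates: a limit-cycle oscillator.
def rhs(t, x, mu=2.0):
    return [x[1], mu * (1 - x[0] ** 2) * x[1] - x[0]]

dt = 0.01
t_train = np.arange(0.0, 25.0, dt)
sol = solve_ivp(rhs, (t_train[0], t_train[-1]), [1.0, 0.0],
                t_eval=t_train, rtol=1e-9, atol=1e-9)
X = sol.y.T                                        # rows are time samples, columns are states

model = ps.SINDy(
    optimizer=ps.STLSQ(threshold=0.1),             # sparse regression
    feature_library=ps.PolynomialLibrary(degree=3) # candidate nonlinearities
)
model.fit(X, t=dt)
model.print()                                      # identified sparse ODE

# Use the identified surrogate to predict the transient from a new initial state.
X_pred = model.simulate([0.5, 0.0], t_train)
print(X_pred.shape)
```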

Minimal codes are linear codes in which all non-zero codewords are minimal, i.e., their support is not properly contained in the support of another codeword. The minimum possible length of such a $k$-dimensional linear code over $\mathbb{F}_q$ is denoted by $m(k,q)$. Here we determine $m(7,2)$, $m(8,2)$, and $m(9,2)$, and give full classifications of all codes attaining $m(k,2)$ for $k\le 7$ as well as of those attaining $m(9,2)$. For $m(11,2)$ and $m(12,2)$ we give improved upper bounds. It turns out that in many cases the extremal codes attaining these lengths have the property that the weights of all codewords are divisible by some constant $\Delta>1$. Hence, we also study the minimum lengths of minimal codes under the additional assumption that the weights of the codewords are divisible by $\Delta$.
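For very small parameters, the definitions above can be checked by brute force: enumerate all codewords of a binary code, test whether any nonzero codeword's support is properly contained in another's, and compute the largest $\Delta$ dividing all weights. The sketch below does exactly that on a small illustrative generator matrix, which is not one of the extremal codes of the paper.

```python
import numpy as np
from itertools import product
from math import gcd
from functools import reduce

def codewords(G):
    """All nonzero codewords of the binary code generated by G."""
    k, n = G.shape
    return [tuple(np.mod(np.array(m) @ G, 2))
            for m in product((0, 1), repeat=k) if any(m)]

def is_minimal(G):
    """True iff no nonzero codeword's support is properly contained in another's."""
    supports = [frozenset(np.flatnonzero(w)) for w in codewords(G)]
    for i, si in enumerate(supports):
        for j, sj in enumerate(supports):
            if i != j and si < sj:        # proper containment of supports
                return False
    return True

def weight_divisor(G):
    """Largest Delta dividing the weights of all nonzero codewords."""
    weights = [int(sum(w)) for w in codewords(G)]
    return reduce(gcd, weights)

G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])
print("minimal:", is_minimal(G), " Delta:", weight_divisor(G))
```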

We study the impact of merging routines in merge-based sorting algorithms. More precisely, we focus on the galloping routine that TimSort uses to merge monotonic sub-arrays, hereafter called runs, and on the impact on the number of element comparisons performed when this routine is used instead of a na\"ive merging routine. This routine was introduced to make TimSort more efficient on arrays with few distinct values. Alas, we prove that, although it makes TimSort sort arrays with two values in linear time, it does not prevent TimSort from requiring up to $\Theta(n \log(n))$ element comparisons to sort arrays of length $n$ with three distinct values. However, we also prove that slightly modifying TimSort's galloping routine results in requiring only $\mathcal{O}(n + n \log(\sigma))$ element comparisons in the worst case when sorting arrays of length $n$ with $\sigma$ distinct values. We do so by focusing on the notion of dual runs, introduced in the 1990s, and on the associated dual run-length entropy. This notion is related both to the number of distinct values and to the number of runs in an array, whose associated run-length entropy was previously used to explain TimSort's otherwise "supernatural" efficiency. We also introduce new notions of fast- and middle-growth for natural merge sorts (i.e., algorithms based on merging runs), properties that hold for several merge sorting algorithms similar to TimSort. We prove that algorithms with the fast- or middle-growth property, provided that they use our variant of TimSort's galloping routine for merging runs, are as efficient as possible at sorting arrays with low run-induced or dual-run-induced complexities.
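The galloping routine discussed above can be sketched as follows: instead of advancing one element at a time, the merge uses exponential probing followed by binary search to find how long a block of one run can be copied at once. The sketch below only illustrates this mechanism; TimSort's gallop threshold, its switch back to pairwise merging, and the exact tie-breaking rules are omitted.

```python
from bisect import bisect_right

def gallop_right(key, run, start):
    """Index in run (at or after start) of the first element greater than key."""
    hi = 1
    while start + hi < len(run) and run[start + hi - 1] <= key:
        hi *= 2                                     # exponential probe
    lo = start + hi // 2                            # last probe known to be <= key
    hi = min(start + hi, len(run))
    return bisect_right(run, key, lo, hi)           # finish with binary search

def galloping_merge(a, b):
    """Merge two sorted lists, copying whole blocks found by galloping."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        k = gallop_right(b[j], a, i)                # copy a[i:k] in one block
        out.extend(a[i:k]); i = k
        if i == len(a):
            break
        k = gallop_right(a[i], b, j)                # then a block of b
        out.extend(b[j:k]); j = k
    out.extend(a[i:]); out.extend(b[j:])
    return out

print(galloping_merge([1, 1, 1, 2, 5], [1, 3, 3, 3, 4]))
```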
