
Correctly modeling the transport of neutrinos is crucial in astrophysical scenarios such as core-collapse supernovae and binary neutron star mergers. In this paper, we focus on the truncated-moment formalism, considering only the first two moments (M1 scheme) within the grey approximation, which reduces the seven-dimensional Boltzmann equation to a system of $3+1$ equations closely resembling the hydrodynamic ones. Solving the M1 scheme remains mathematically challenging, since the radiation-matter interaction must be modeled in regimes where the evolution equations become stiff and behave as an advection-diffusion problem. Here, we present different global, high-order time integration schemes based on Implicit-Explicit Runge-Kutta (IMEX) methods, designed to overcome the time-step restriction caused by such behavior while still allowing the explicit Runge-Kutta steps commonly employed for the MHD and Einstein equations. Finally, we analyze their performance in several numerical tests.
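To illustrate the idea behind such splittings, the following minimal sketch (not one of the schemes proposed in the paper; the toy model and function name are illustrative) applies a single first-order IMEX step to a stiff relaxation problem: the advection term is integrated explicitly, while the stiff source term is treated implicitly so that the time step is not restricted by the relaxation scale.

    import numpy as np

    def imex_euler_step(u, u_eq, dt, dx, a, kappa):
        # Toy model: du/dt + a du/dx = -kappa * (u - u_eq), with kappa possibly >> 1/dt.
        # Explicit first-order upwind advection (assumes a > 0 and a periodic grid).
        u_star = u - dt * a * (u - np.roll(u, 1)) / dx
        # Implicit relaxation: solve u_new = u_star - dt*kappa*(u_new - u_eq);
        # the source is linear, so the implicit solve has a closed form.
        return (u_star + dt * kappa * u_eq) / (1.0 + dt * kappa)

Higher-order IMEX Runge-Kutta schemes chain several such stages, keeping the explicit treatment of the non-stiff transport part.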

Related content

This research project investigates Lenia, an artificial life platform that simulates ecosystems of digital creatures. Lenia's ecosystem consists of simple artificial organisms that can move, consume, grow, and reproduce. The platform is important as a tool for studying artificial life and evolution, as it provides a scalable and flexible environment for creating a diverse range of organisms with varying abilities and behaviors. Measuring complexity in Lenia is a key aspect of this study, which identifies metrics for the long-term complex emergent behavior of rules, with the aim of evolving Lenia behaviors that have not yet been discovered. The genetic algorithm uses neighborhoods (kernels) as the genotype while keeping the remaining Lenia parameters, such as the growth function, fixed; it produces different behaviors across the population and then measures a fitness value to assess the complexity of the resulting behavior. First, we use Variation over Time as a fitness function, where higher variance between frames is rewarded. Second, we use an autoencoder-based fitness, where variation in the per-frame reconstruction losses is rewarded. Third, we use a combined fitness, where higher variation in the pixel density of the reconstructed frames is rewarded. All three experiments are tuned with respect to the alive-pixel threshold and the number of frames used. Finally, after running nine experiments per fitness function for 500 generations, we pick configurations from all experiments that leave scope for further evolution and run them for 2500 generations. Results show that the kernel's center of mass increases with a specific set of pixels, and together with the borders the kernel tends towards a Gaussian distribution.
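As a rough illustration of the first fitness function, the sketch below scores a run by the variance of frame-to-frame changes; the function name, normalisation, and thresholding are placeholders, not the exact metric used in the experiments.

    import numpy as np

    def variation_over_time_fitness(frames, alive_threshold=0.1):
        # frames: array of shape (T, H, W) holding one Lenia run.
        frames = np.asarray(frames, dtype=float)
        alive = (frames > alive_threshold).astype(float)   # apply the alive-pixel threshold
        diffs = np.abs(np.diff(alive, axis=0))             # frame-to-frame changes
        return float(diffs.var())                          # higher variance -> higher fitness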

In cut sparsification, all cuts of a hypergraph $H=(V,E,w)$ are approximated within $1\pm\epsilon$ factor by a small hypergraph $H'$. This widely applied method was generalized recently to a setting where the cost of cutting each $e\in E$ is provided by a splitting function, $g_e: 2^e\to\mathbb{R}_+$. This generalization is called a submodular hypergraph when the functions $\{g_e\}_{e\in E}$ are submodular, and it arises in machine learning, combinatorial optimization, and algorithmic game theory. Previous work focused on the setting where $H'$ is a reweighted sub-hypergraph of $H$, and measured size by the number of hyperedges in $H'$. We study such sparsification, and also a more general notion of representing $H$ succinctly, where size is measured in bits. In the sparsification setting, where size is the number of hyperedges, we present three results: (i) all submodular hypergraphs admit sparsifiers of size polynomial in $n=|V|$; (ii) monotone-submodular hypergraphs admit sparsifiers of size $O(\epsilon^{-2} n^3)$; and (iii) we propose a new parameter, called spread, to obtain even smaller sparsifiers in some cases. In the succinct-representation setting, we show that a natural family of splitting functions admits a succinct representation of much smaller size than via reweighted subgraphs (almost by factor $n$). This large gap is surprising because for graphs, the most succinct representation is attained by reweighted subgraphs. Along the way, we introduce the notion of deformation, where $g_e$ is decomposed into a sum of functions of small description, and we provide upper and lower bounds for deformation of common splitting functions.
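For concreteness, in this model a vertex set $S$ is charged $g_e(S\cap e)$ on every hyperedge $e$, and the total cut value sums these charges; the sketch below evaluates such a cut for caller-supplied splitting functions (an illustrative helper, not code from the paper).

    def cut_value(hyperedges, splitting_fns, S):
        # hyperedges: list of vertex collections; splitting_fns[i] maps a subset of
        # hyperedges[i] to a nonnegative cost (submodular in the setting above).
        S = frozenset(S)
        return sum(g(S & frozenset(e)) for e, g in zip(hyperedges, splitting_fns))

    def all_or_nothing(e, w=1.0):
        # Example splitting function: pay w exactly when the hyperedge is split by S.
        e = frozenset(e)
        return lambda A: w if 0 < len(A) < len(e) else 0.0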

This paper is concerned with designing, analyzing, and implementing linear and nonlinear discretization schemes for the distributed optimal control problem (OCP) constrained by the Cahn-Hilliard (CH) equation. We propose three difference schemes to approximate and investigate the solution behaviour of the OCP for the CH equation. We present the convergence analysis of the proposed discretizations and verify our findings by presenting numerical experiments.
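For orientation, a standard distributed-control formulation of such a problem takes the form $$\min_{y,u}\; J(y,u)=\frac{1}{2}\|y-y_d\|_{L^2}^{2}+\frac{\alpha}{2}\|u\|_{L^2}^{2}\quad\text{subject to}\quad \partial_t y=\Delta w+u,\qquad w=-\varepsilon^{2}\Delta y+F'(y),$$ where $y_d$ is a desired state, $\alpha>0$ a regularisation weight, $\varepsilon$ the interface parameter, and $F$ a double-well potential; the paper's precise setting, boundary conditions, and notation may differ.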

We consider hypergraph network design problems where the goal is to construct a hypergraph satisfying certain properties. In graph network design problems, the number of edges in an arbitrary solution is at most the square of the number of vertices. In contrast, in hypergraph network design problems, the number of hyperedges in an arbitrary solution could be exponential in the number of vertices, and hence additional care is necessary to design polynomial-time algorithms. The central theme of this work is to show that certain hypergraph network design problems admit solutions with a polynomial number of hyperedges and, moreover, can be solved in strongly polynomial time. Our work improves on the previous fastest pseudo-polynomial run-time for these problems. In addition, we develop algorithms that return (near-)uniform hypergraphs as solutions. The hypergraph network design problems that we focus on are the splitting-off operation in hypergraphs, connectivity augmentation using hyperedges, and covering skew-supermodular functions using hyperedges. Our definition of the splitting-off operation in hypergraphs and our proof of its existence via a strongly polynomial-time algorithm that computes it are likely to be of independent graph-theoretic interest.

In this article, we introduce and study a new integer sequence referred to as the higher order Mersenne sequence. The proposed sequence is analogous to the higher order Fibonacci numbers and closely associated with the Mersenne numbers. Here, we discuss various algebraic properties of this new sequence, such as Binet's formula, Catalan's identity, d'Ocagne's identity, generating functions, and finite and binomial sums, as well as some inter-relations with the Mersenne and Jacobsthal numbers. Moreover, we study the sequence generated from the binomial transforms of the higher order Mersenne numbers and present its recurrence relation and algebraic properties. Lastly, we give matrix generators and a tridiagonal matrix representation for the higher order Mersenne numbers.
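As background for the inter-relations mentioned above, the classical Mersenne and Jacobsthal numbers satisfy simple second-order recurrences; the sketch below computes them (the higher order generalisation itself is defined in the paper and not reproduced here).

    def mersenne(n):
        # M_k = 2^k - 1, equivalently M_k = 3*M_{k-1} - 2*M_{k-2}, with M_0 = 0, M_1 = 1.
        m = [0, 1]
        for k in range(2, n + 1):
            m.append(3 * m[-1] - 2 * m[-2])
        return m[: n + 1]

    def jacobsthal(n):
        # J_k = (2^k - (-1)^k) / 3, equivalently J_k = J_{k-1} + 2*J_{k-2}, with J_0 = 0, J_1 = 1.
        j = [0, 1]
        for k in range(2, n + 1):
            j.append(j[-1] + 2 * j[-2])
        return j[: n + 1]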

In this paper we prove convergence for contractive time discretisation schemes for semi-linear stochastic evolution equations with irregular Lipschitz nonlinearities, initial values, and additive or multiplicative Gaussian noise on $2$-smooth Banach spaces $X$. The leading operator $A$ is assumed to generate a strongly continuous semigroup $S$ on $X$, and the focus is on non-parabolic problems. The main result concerns convergence of the uniform strong error $$E_{k}^{\infty} := \Big(\mathbb{E} \sup_{j\in \{0, \ldots, N_k\}} \|U(t_j) - U^j\|_X^p\Big)^{1/p} \to 0\quad (k \to 0),$$ where $p \in [2,\infty)$, $U$ is the mild solution, $U^j$ is obtained from a time discretisation scheme, $k$ is the step size, and $N_k = T/k$ for final time $T>0$. This generalises previous results to a larger class of admissible nonlinearities and noise as well as rough initial data from the Hilbert space case to more general spaces. We present a proof based on a regularisation argument. Within this scope, we extend previous quantified convergence results for more regular nonlinearity and noise from Hilbert to $2$-smooth Banach spaces. The uniform strong error cannot be estimated in terms of the simpler pointwise strong error $$E_k := \bigg(\sup_{j\in \{0,\ldots,N_k\}}\mathbb{E} \|U(t_j) - U^{j}\|_X^p\bigg)^{1/p},$$ which most of the existing literature is concerned with. Our results are illustrated for a variant of the Schr\"odinger equation, for which previous convergence results were not applicable.
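To make the distinction between the two error notions concrete, the following sketch estimates both from Monte Carlo samples of the path-wise discretisation error (an illustrative helper, assuming the norms $\|U(t_j)-U^j\|_X$ have already been computed per sample and time step).

    import numpy as np

    def uniform_strong_error(err, p=2):
        # err[m, j] = ||U(t_j) - U^j||_X for Monte Carlo sample m and time index j.
        # Uniform error: p-th moment of the path-wise supremum (sup inside the expectation).
        return np.mean(np.max(err, axis=1) ** p) ** (1.0 / p)

    def pointwise_strong_error(err, p=2):
        # Pointwise error: supremum over time of the per-time p-th moments.
        return np.max(np.mean(err ** p, axis=0)) ** (1.0 / p)

By construction the uniform estimate dominates the pointwise one, which is why convergence of the former does not follow from the latter.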

Computational methods for thermal radiative transfer problems exhibit high computational costs and a prohibitive memory footprint when the spatial and directional domains are finely resolved. A strategy to reduce such computational costs is dynamical low-rank approximation (DLRA), which represents and evolves the solution on a low-rank manifold, thereby significantly decreasing computational and memory requirements. Efficient discretizations for the DLRA evolution equations need to be carefully constructed to guarantee stability while enabling mass conservation. In this work, we focus on the Su-Olson closure and derive a stable discretization through an implicit coupling of energy and radiation density. Moreover, we propose a rank-adaptive strategy to preserve local mass conservation. Numerical results are presented which showcase the accuracy and efficiency of the proposed method.
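The core compression step behind rank-adaptive DLRA can be sketched as a tolerance-based SVD truncation of a solution snapshot; this is only the generic building block, not the conservative, implicitly coupled integrator constructed in the paper.

    import numpy as np

    def truncate_to_tolerance(W, tol):
        # Keep the smallest rank r whose discarded singular values have
        # Frobenius norm below tol; return the rank-r factors.
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]   # tail[k] = norm discarded when keeping k values
        r = len(s)
        for k in range(1, len(s)):
            if tail[k] <= tol:
                r = k
                break
        return U[:, :r], s[:r], Vt[:r, :]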

One of the most studied extensions of the famous Traveling Salesperson Problem (TSP) is the {\sc Multiple TSP}: a set of $m\geq 1$ salespersons collectively traverses a set of $n$ cities by $m$ non-trivial tours, to minimize the total length of their tours. This problem can also be considered to be a variant of {\sc Uncapacitated Vehicle Routing} where the objective function is the sum of all tour lengths. When all $m$ tours start from a single common \emph{depot} $v_0$, then the metric {\sc Multiple TSP} can be approximated equally well as the standard metric TSP, as shown by Frieze (1983). The {\sc Multiple TSP} becomes significantly harder to approximate when there is a \emph{set} $D$ of $d \geq 1$ depots that form the starting and end points of the $m$ tours. For this case only a $(2-1/d)$-approximation in polynomial time is known, as well as a $3/2$-approximation for \emph{constant} $d$ which requires a prohibitive run time of $n^{\Theta(d)}$ (Xu and Rodrigues, \emph{INFORMS J. Comput.}, 2015). A recent work of Traub, Vygen and Zenklusen (STOC 2020) gives another approximation algorithm for {\sc Multiple TSP} running in time $n^{\Theta(d)}$ and reducing the problem to approximating TSP. In this paper we overcome the $n^{\Theta(d)}$ time barrier: we give the first efficient approximation algorithm for {\sc Multiple TSP} with a \emph{variable} number $d$ of depots that yields a better-than-2 approximation. Our algorithm runs in time $(1/\varepsilon)^{\mathcal O(d\log d)}\cdot n^{\mathcal O(1)}$, and produces a $(3/2+\varepsilon)$-approximation with constant probability. For the graphic case, we obtain a deterministic $3/2$-approximation in time $2^d\cdot n^{\mathcal O(1)}$.

The general consensus is that the Multiplicative Extended Kalman Filter (MEKF) is superior to the Additive Extended Kalman Filter (AEKF), based on a wealth of theoretical evidence. This paper presents a practical comparison between the two filters in simulation, with the goal of verifying whether these theoretical foundations hold in practice. The AEKF and MEKF are two variants of the Extended Kalman Filter that differ in their approach to linearizing the system dynamics. The AEKF uses an additive correction term to update the state estimate, while the MEKF uses a multiplicative correction term. The two also differ in the state representation they use: the AEKF uses the quaternion as its state, while the MEKF uses the Gibbs vector. The results show that the MEKF consistently outperforms the AEKF in terms of estimation accuracy, with lower uncertainty. The AEKF is more computationally efficient, but the difference is so small as to be negligible and has no effect on real-time applications. Overall, the results suggest that the MEKF is a better choice for satellite attitude estimation due to its superior estimation accuracy and lower uncertainty, which agrees with the conclusions of previous work.
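The difference between the two correction steps can be sketched as follows; the quaternion convention (scalar-first, Hamilton product) and the small-angle error parametrisation are one common choice among several, not necessarily the one used in the paper.

    import numpy as np

    def quat_multiply(p, q):
        # Hamilton product, scalar-first convention.
        pw, px, py, pz = p
        qw, qx, qy, qz = q
        return np.array([
            pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw,
        ])

    def aekf_correction(q, dq4):
        # Additive update: add a 4-vector correction to the quaternion, then renormalise.
        q = q + dq4
        return q / np.linalg.norm(q)

    def mekf_correction(q, a3):
        # Multiplicative update: build a small error quaternion from the 3-vector
        # attitude correction and compose it with the reference quaternion.
        dq = np.concatenate(([1.0], 0.5 * np.asarray(a3)))
        dq /= np.linalg.norm(dq)
        return quat_multiply(dq, q)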

Artificial Intelligence (AI) is rapidly becoming integrated into military Command and Control (C2) systems as a strategic priority for many defence forces. The successful implementation of AI promises to herald a significant leap in C2 agility through automation. However, realistic expectations need to be set on what AI can achieve in the foreseeable future. This paper will argue that AI could lead to a fragility trap, whereby the delegation of C2 functions to an AI could increase the fragility of C2, resulting in catastrophic strategic failures. This calls for a new framework for AI in C2 to avoid this trap. We will argue that antifragility, along with agility, should form the core design principles for AI-enabled C2 systems. This duality is termed Agile, Antifragile, AI-Enabled Command and Control (A3IC2). An A3IC2 system continuously improves its capacity to perform in the face of shocks and surprises through overcompensation from feedback during the C2 decision-making cycle. An A3IC2 system will not only be able to survive within a complex operational environment, it will also thrive, benefiting from the inevitable shocks and volatility of war.
