
We propose two implicit numerical schemes for the low-rank time integration of stiff nonlinear partial differential equations. Our approach uses the preconditioned Riemannian trust-region method of Absil, Baker, and Gallivan, 2007. We demonstrate the efficiency of our method for solving the Allen-Cahn and the Fisher-KPP equation on the manifold of fixed-rank matrices. Furthermore, our approach allows us to avoid the restriction on the time step typical of methods that use the fixed-point iteration to solve the inner nonlinear equations. Finally, we demonstrate the efficiency of the preconditioner on the same variational problems presented in Sutti and Vandereycken, 2021.
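The basic building block behind integration on the manifold of fixed-rank matrices can be illustrated with a small sketch. The following is not the paper's integrator (which uses a preconditioned Riemannian trust-region method); it only shows the metric-projection retraction onto the rank-r manifold via a truncated SVD, with an illustrative function name and random data.

```python
import numpy as np

def retract_fixed_rank(X, step, r):
    """Map X + step back onto the manifold of rank-r matrices by
    truncated SVD (the metric-projection retraction)."""
    U, s, Vt = np.linalg.svd(X + step, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# With a zero step this reduces to the best rank-2 approximation of A
# (Eckart-Young), which is exactly what the truncation computes.
A = np.random.default_rng(0).standard_normal((8, 6))
Y = retract_fixed_rank(A, np.zeros_like(A), 2)
```

In a low-rank time stepper, each (implicit) step would be computed in the tangent space and then retracted back to the manifold with a map like this one.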

Related content

Integration, the VLSI Journal. Publisher: Elsevier.

Numerous researchers from various disciplines have explored commonalities and divergences in the evolution of complex social formations. Here, we explore whether there is a 'characteristic' time-course for the evolution of social complexity in a number of different geographic areas. Data from the Seshat: Global History Databank are shifted so that the overlapping time series can be fitted to a single logistic regression model for all 23 geographic areas under consideration. The resulting regression shows convincing out-of-sample predictions, and its period of extensive growth in social complexity can be identified via bootstrapping as a time interval of roughly 2500 years. To analyse the endogenous growth of social complexity, each time series is restricted to a central time interval without major disruptions in cultural or institutional continuity; both approaches result in a similar logistic regression curve. Our results suggest that these different areas have indeed experienced a similar course in their evolution of social complexity, but that this is a lengthy process involving both internal developments and external influences.
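The logistic fit at the heart of the abstract can be sketched in a few lines. This is a toy reconstruction on synthetic, noise-free data (the numbers are illustrative, not Seshat values): with the carrying capacity normalised to one, the logit transform makes the logistic curve linear in time, so its rate and midpoint fall out of an ordinary linear fit, and the 10%-to-90% growth window follows from the rate.

```python
import numpy as np

# Synthetic logistic trajectory: complexity = 1 / (1 + exp(-r*(t - t0))).
r_true, t0_true = 0.002, 1500.0          # per-year rate and midpoint (made up)
t = np.linspace(0.0, 3000.0, 200)        # hypothetical time axis, years
y = 1.0 / (1.0 + np.exp(-r_true * (t - t0_true)))

# Logit transform linearises the model: log(y/(1-y)) = r*(t - t0).
logit = np.log(y / (1.0 - y))
r_est, b = np.polyfit(t, logit, 1)
t0_est = -b / r_est

# Span between 10% and 90% of the ceiling: 2*ln(9)/r, about 2200 years here,
# the same order as the ~2500-year growth interval reported above.
growth_years = 2.0 * np.log(9.0) / r_est
```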

We present compact semi-implicit finite difference schemes on structured grids for the numerical solution of advection by an external velocity and by a speed in the normal direction, as arises in level set methods. The most involved numerical scheme is third order accurate for linear advection with a space-dependent velocity and unconditionally stable in the sense of von Neumann stability analysis. We also present a simple high-resolution scheme that gives a TVD (Total Variation Diminishing) approximation of the spatial derivative of the advected level set function. In the case of nonlinear advection, a semi-implicit discretization is proposed to linearize the problem. The compact form of the implicit stencil, containing unknowns only in the upwind direction, allows the application of efficient algebraic solvers such as fast sweeping methods. Numerical tests evolving smooth and non-smooth interfaces, and an example with a large variation of velocity, confirm the good accuracy of the methods and the fast convergence of the algebraic solver even for very large Courant numbers.
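The key structural point, that an upwind-only implicit stencil can be solved by a single sweep, is easy to demonstrate. The sketch below is a first-order scheme (not the paper's third-order one) for u_t + a u_x = 0 with a > 0: each node couples only to its upwind neighbour, so the implicit system is solved by one forward pass, which is what makes fast-sweeping-type solvers applicable, and the scheme remains stable at Courant numbers well above one.

```python
import numpy as np

def implicit_upwind_step(u, a, dt, dx):
    """One implicit first-order upwind step for u_t + a*u_x = 0, a > 0.
    The implicit system is lower-triangular, so a single forward sweep
    solves it exactly; no iteration or CFL restriction is needed."""
    c = a * dt / dx                      # Courant number, may exceed 1
    v = u.copy()                         # v[0] kept as inflow value
    for i in range(1, len(u)):
        v[i] = (u[i] + c * v[i - 1]) / (1.0 + c)   # convex combination
    return v

x = np.linspace(0.0, 1.0, 101)
u = np.exp(-200.0 * (x - 0.3) ** 2)      # smooth bump
u1 = implicit_upwind_step(u, a=1.0, dt=0.05, dx=x[1] - x[0])  # Courant = 5
```

Because each updated value is a convex combination of old and upwind values, the discrete maximum principle holds for any time step, which is the unconditional stability referred to above.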

Recommendation algorithms play an important role in recommender systems (RS), predicting users' interests in and preferences for given items based on their known information. Recently, a recommendation algorithm based on graph Laplacian regularization was proposed, which treats the prediction problem of the recommender system as the reconstruction of a graph signal from a small number of samples under the same graph model. Such a technique takes into account information from both labeled and unlabeled samples, thereby obtaining good prediction accuracy. However, when the data size is large, solving the reconstruction model is computationally expensive even with an approximate strategy. In this paper, we propose an equivalent reconstruction model that can be solved exactly with extremely low computational cost. Based on it, a final prediction algorithm is proposed. Our experiments show that the proposed method significantly reduces the computational cost while maintaining good prediction accuracy.
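The generic form of graph-Laplacian-regularised reconstruction (not the paper's accelerated equivalent model) can be shown on a toy graph: recover a smooth signal from a few observed labels by solving the linear system (M + λL)x = My, where M masks the observed entries and L penalises roughness across edges. All numbers below are illustrative.

```python
import numpy as np

W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # toy adjacency matrix
L = np.diag(W.sum(axis=1)) - W              # combinatorial graph Laplacian

y = np.array([1.0, 1.0, 0.0, 0.0])          # ratings; node 2 is unobserved
M = np.diag([1.0, 1.0, 0.0, 1.0])           # sampling mask (1 = observed)
lam = 0.1                                   # regularisation weight

# Minimise ||M(x - y)||^2 + lam * x^T L x  =>  (M + lam*L) x = M y.
x = np.linalg.solve(M + lam * L, M @ y)
```

The unobserved node 2 receives a value interpolated smoothly from its neighbours (two rated 1, one rated 0), while observed nodes stay close to their labels; the paper's contribution is solving an equivalent model far more cheaply at scale.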

Robot swarms are a hot topic in the robotics research community. In this paper, we propose a decentralized framework for a car-like robotic swarm that is capable of real-time planning in cluttered environments. In this system, path finding is guided by environmental topology information to avoid frequent topological changes, and search-based speed planning is leveraged to escape the local minima induced by infeasible initial values. Spatial-temporal optimization is then employed to generate a safe, smooth, and dynamically feasible trajectory. During optimization, the trajectory is discretized at fixed time steps. A penalty is imposed on the signed distance between agents to realize collision avoidance, and differential flatness combined with a limit on the front steering angle enforces the non-holonomic constraints. With trajectories broadcast over the wireless network, agents are able to check for and prevent potential collisions. We validate the robustness of our system in simulation and real-world experiments. Code will be released as open-source packages.
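The signed-distance penalty mentioned above has a standard soft-constraint form, sketched here for two disc-shaped agents. This is a generic illustration, not the paper's formulation (which handles full car footprints via differential flatness); the radius and margin values are made up.

```python
import numpy as np

def collision_penalty(p1, p2, radius=0.5, margin=0.2):
    """Quadratic penalty on the signed clearance between two disc agents:
    zero when the clearance exceeds the safety margin, growing smoothly
    as the agents approach or overlap."""
    clearance = np.linalg.norm(p1 - p2) - 2.0 * radius   # signed distance
    return max(0.0, margin - clearance) ** 2

far = collision_penalty(np.array([0.0, 0.0]), np.array([3.0, 0.0]))
near = collision_penalty(np.array([0.0, 0.0]), np.array([1.05, 0.0]))
```

Summing such terms over all agent pairs and discretization times yields a smooth cost that an off-the-shelf trajectory optimizer can drive to zero.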

Statistical data by their very nature are indeterminate in the sense that if one repeated the process of collecting the data the new data set would be somewhat different from the original. Therefore, a statistical method, a map $\Phi$ taking a data set $x$ to a point in some space F, should be stable at $x$: Small perturbations in $x$ should result in a small change in $\Phi(x)$. Otherwise, $\Phi$ is useless at $x$ or -- and this is important -- near $x$. So one doesn't want $\Phi$ to have "singularities," data sets $x$ such that the limit of $\Phi(y)$ as $y$ approaches $x$ doesn't exist. (Yes, the same issue arises elsewhere in applied math.) However, broad classes of statistical methods have topological obstructions to continuity: They must have singularities. We show why and give lower bounds on the Hausdorff dimension, even Hausdorff measure, of the set of singularities of such data maps. There seem to be numerous examples. We apply mainly topological methods to study the (topological) singularities of functions defined (on dense subsets of) "data spaces" and taking values in spaces with nontrivial homology. At least in this book, data spaces are usually compact manifolds. The purpose is to gain insight into the numerical conditioning of statistical description, data summarization, and inference and learning methods. We prove general results that can often be used to bound below the dimension of the singular set. We apply our topological results to develop lower bounds on the Hausdorff measure of the singular set. We apply these methods to the study of plane fitting and measuring location of data on spheres. This is not a "final" version, merely another attempt.
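A concrete instance of the kind of singularity discussed above, in the spirit of "measuring location of data on spheres": the extrinsic mean direction of points on the circle is undefined exactly when the Euclidean mean of the embedded points is the origin, e.g. for two antipodal points, and the map is badly conditioned for any data set near that configuration. The sketch below is illustrative, not from the book.

```python
import numpy as np

def mean_direction(angles):
    """Extrinsic mean direction of points on the unit circle: embed, average
    in the plane, and project back. Undefined when the planar mean vanishes,
    which is a singularity of this data map."""
    m = np.array([np.cos(angles).mean(), np.sin(angles).mean()])
    if np.linalg.norm(m) < 1e-12:
        raise ValueError("singular data set: mean direction undefined")
    return np.arctan2(m[1], m[0])

# Two antipodal points: the planar mean is (numerically) zero.
try:
    mean_direction(np.array([0.0, np.pi]))
    singular = False
except ValueError:
    singular = True
```

No continuous redefinition at the antipodal configuration exists, because nearby data sets can be rotated to make the mean direction come out anywhere on the circle; this is the topological obstruction in miniature.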

Polymer flooding is crucial in hydrocarbon production, increasing oil recovery by improving the water-oil mobility ratio. However, the high viscosity of the displacing fluid may cause sand-production problems in poorly consolidated reservoirs. This work investigates the effect of polymer injection on the sand production phenomenon using an experimental study and a numerical model at laboratory scale. The experiment uses an artificially made sandstone based on the characteristics of an oil field in Kazakhstan. A polymer solution based on xanthan gum is injected into the core to study the impact of polymer flooding on sand production. The rheology of the polymer solution is also examined using a rotational rheometer, and a power-law model fits the outcomes. We observe no sand production during brine injection over a range of flow rates. However, sanding is noticed when the polymer solution is injected: more than 50% of the cumulatively produced sand is obtained after one pore volume of polymer solution is injected. In the numerical part of the study, we present a model coupling DEM with CFD to describe polymer flow in a granular porous medium. In the solid phase, a modified cohesive contact model characterizes the bonding mechanism between sand particles. The fluid phase is modeled as a non-Newtonian fluid using a power-law model. We verify the numerical model against the laboratory experiment. The numerical model shows non-uniform bond breakage when only a confining stress is applied, whereas injection of the polymer into the sample leads to a relatively gradual decrease in bonds. The significant fluid pressure difference results in higher fluid velocity, which causes intensive sand production at the beginning of the simulation. The ratio of medium-sized produced particles is greater than the initial ratio of such particles before injection.
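Fitting the power-law (Ostwald-de Waele) rheology model mentioned above is a one-line regression in log-log space, since τ = K·γ̇ⁿ becomes log τ = n·log γ̇ + log K. The sketch uses synthetic, noise-free data; the consistency index K and flow index n below are illustrative, not the paper's measurements.

```python
import numpy as np

gamma = np.logspace(0, 3, 20)            # shear rate, 1/s
K_true, n_true = 0.5, 0.4                # made-up consistency and flow index
tau = K_true * gamma ** n_true           # shear stress, Pa

# Linear fit in log-log coordinates recovers n (slope) and K (intercept).
n_est, logK = np.polyfit(np.log(gamma), np.log(tau), 1)
K_est = np.exp(logK)
```

A flow index n < 1, as here, corresponds to the shear-thinning behaviour typical of xanthan gum solutions.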

To integrate large systems of nonlinear differential equations in time, we consider a variant of nonlinear waveform relaxation (also known as dynamic iteration or Picard-Lindel\"of iteration), where at each iteration a linear inhomogeneous system of differential equations has to be solved. This is done by the exponential block Krylov subspace (EBK) method. Thus, we have an inner-outer iterative method, where iterative approximations are determined over a certain time interval, with no time stepping involved. This approach has recently been shown to be efficient as a time-parallel integrator within the PARAEXP framework. In this paper, the convergence behavior of this method is assessed theoretically and practically. We examine the efficiency of the method by testing it on the nonlinear Burgers and Liouville-Bratu-Gelfand equations and comparing its performance with that of conventional time-stepping integrators.
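The outer iteration described above can be sketched on a scalar problem. Each sweep freezes the nonlinearity at the previous iterate, leaving a linear inhomogeneous equation over the whole time interval; in the sketch below that inner linear problem is integrated by plain implicit Euler rather than the EBK method of the paper, purely to keep the example self-contained.

```python
import numpy as np

def waveform_relaxation(a, g, u0, t, sweeps):
    """Waveform (Picard) iteration for du/dt = a*u + g(u) on the grid t.
    Each sweep solves the linear problem dv/dt = a*v + g(u_prev) over the
    whole interval, here with implicit Euler as the inner linear solver."""
    dt = t[1] - t[0]
    u = np.full_like(t, u0)                  # initial guess: constant waveform
    for _ in range(sweeps):
        v = np.empty_like(t)
        v[0] = u0
        for k in range(1, len(t)):
            # source term g frozen at the previous sweep's waveform
            v[k] = (v[k - 1] + dt * g(u[k])) / (1.0 - dt * a)
        u = v
    return u

# Burgers-like scalar test: du/dt = -u^2, u(0) = 1, exact u(t) = 1/(1+t).
t = np.linspace(0.0, 1.0, 101)
u = waveform_relaxation(0.0, lambda w: -w ** 2, 1.0, t, sweeps=25)
```

The sweeps converge factorially fast on a bounded interval, and the converged waveform coincides with the inner scheme's own solution; the point of the paper's construction is that the inner linear solves parallelise in time.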

Variable selection is a procedure for identifying the truly important predictors among the inputs. Complex nonlinear dependencies and strong coupling pose great challenges for variable selection in high-dimensional data. In addition, real-world applications have increasing demands for interpretability of the selection process: a pragmatic approach should not only attain the most predictive covariates, but also provide ample and easy-to-understand grounds for removing certain covariates. In view of these requirements, this paper puts forward an approach for transparent and nonlinear variable selection. In order to transparently decouple the information within the input predictors, a three-step heuristic search is designed, via which the input predictors are grouped into four subsets: the relevant ones to be selected, and the uninformative, redundant, and conditionally independent ones to be removed. A nonlinear partial correlation coefficient is introduced to better identify predictors that have a nonlinear functional dependence with the response. The proposed method is model-free, and the selected subset can serve as competent input for commonly used predictive models. Experiments demonstrate the superior performance of the proposed method against state-of-the-art baselines in terms of prediction accuracy and model interpretability.
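The redundancy-screening idea can be illustrated with the ordinary (linear) partial correlation: if a predictor correlates strongly with the response but its correlation vanishes once another predictor is regressed out, it is redundant given that predictor. The paper's coefficient is a nonlinear generalisation; the sketch below, with made-up data, only shows the decoupling logic.

```python
import numpy as np

def partial_corr(a, b, z):
    """Correlation of a and b after linearly regressing out z from both."""
    def resid(v):
        Z = np.column_stack([z, np.ones_like(z)])
        beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
        return v - Z @ beta
    ra, rb = resid(a), resid(b)
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

rng = np.random.default_rng(0)
x1 = rng.standard_normal(500)
x2 = x1 + 0.01 * rng.standard_normal(500)   # near-duplicate of x1
y = 2.0 * x1 + 0.1 * rng.standard_normal(500)
```

Here x2 has marginal correlation near 1 with y, yet its partial correlation given x1 is near 0, so a transparent selector can report x2 as removed for redundancy rather than silently dropping it.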

To comprehend complex systems with multiple states, it is imperative to reveal the identity of these states from system outputs. Nevertheless, the mathematical models describing these systems often exhibit nonlinearity, which renders the resolution of the parameter inverse problem from observed spatiotemporal data a challenging endeavor. Starting from the data observed from such systems, we propose a novel framework that facilitates parameter identification for multi-state systems governed by partial differential equations with spatiotemporally varying parameters. Our framework consists of two integral components: a constrained self-adaptive physics-informed neural network, encompassing a sub-network, as our methodology for parameter identification, and a finite mixture model approach to detect regions of probable parameter variation. Through this scheme, we can precisely ascertain the unknown varying parameters of a complex multi-state system, thereby accomplishing the inversion of the varying parameters. Furthermore, we showcase the efficacy of our framework on two numerical cases: the 1D Burgers' equation with time-varying parameters and the 2D wave equation with a space-varying parameter.
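The full PINN machinery does not compress into a short example, but the underlying identification principle does: given spatiotemporal data, choose the parameter that minimises the PDE residual. The sketch below is a deliberately simplified stand-in (finite differences and least squares instead of a neural network, a constant parameter instead of a varying one) on synthetic data for u_t = c·u_x with a made-up c.

```python
import numpy as np

# Synthetic observations: u(x,t) = sin(x + c*t) solves u_t = c * u_x exactly.
c_true = 0.7
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
t = np.linspace(0.0, 1.0, 50)
X, T = np.meshgrid(x, t, indexing="ij")
U = np.sin(X + c_true * T)

# Approximate the derivatives from the data alone, then pick c by least
# squares on the residual ||U_t - c * U_x||^2, i.e. c = <U_x, U_t>/<U_x, U_x>.
Ut = np.gradient(U, t, axis=1)
Ux = np.gradient(U, x, axis=0)
c_est = float((Ux.ravel() @ Ut.ravel()) / (Ux.ravel() @ Ux.ravel()))
```

The paper replaces the finite-difference derivatives with automatic differentiation of a trained network and lets the parameter vary in space and time, with a mixture model flagging where that variation concentrates.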

The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations of differential equations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
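The "two sides of the same coin" point has a one-screen demonstration: the Euler discretisation of a neural ODE dz/dt = f_θ(z) is exactly a residual network z_{k+1} = z_k + h·f_θ(z_k), with one residual block per step. The sketch below uses a toy f_θ (one tanh layer with fixed, untrained weights) purely to show the correspondence; it is not from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((3, 3))    # fixed, untrained "parameters"
b = np.zeros(3)
f = lambda z: np.tanh(W @ z + b)         # the vector field f_theta

def neural_ode_euler(z0, h, steps):
    """Integrate dz/dt = f(z) with Euler steps; each step is literally a
    residual block z <- z + h * f(z)."""
    z = z0.copy()
    for _ in range(steps):
        z = z + h * f(z)
    return z

z0 = np.array([1.0, -1.0, 0.5])
deep = neural_ode_euler(z0, h=0.01, steps=200)     # 200 "layers", t = 2
shallow = neural_ode_euler(z0, h=0.02, steps=100)  # 100 "layers", same t = 2
```

Both networks approximate the same flow map z(2), so their outputs nearly agree despite different depths: the continuous-time object, not the layer count, is what matters, which is the starting point for adaptive solvers and the rest of the NDE toolbox.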
