
We analyze the spatial structure of the asymptotics of a solution to a singularly perturbed system of mass transfer equations. The leading term of the asymptotics is described by a parabolic equation with a possibly degenerate spatial part. We prove a theorem that establishes a relationship between the degree of degeneracy and the numbers of equations in the system and of spatial variables in some particular cases. The work relies heavily on the calculation of the eigenvalues of the matrices that determine the spatial structure of the asymptotics, carried out in the computer algebra system Wolfram Mathematica. We put forward a hypothesis that the observed connection holds for an arbitrary number of equations and spatial variables.
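The abstract's symbolic eigenvalue computations are done in Wolfram Mathematica; the same kind of parameterized eigenvalue calculation can be sketched in Python with SymPy. The matrix below is a generic illustration, not the paper's actual matrix.

```python
# Hedged sketch: symbolic eigenvalues of a parameter-dependent matrix,
# analogous to the Mathematica computations described above.  The matrix
# is an illustrative example, not one from the paper.
import sympy as sp

a = sp.symbols("a", positive=True)
# A symmetric 2x2 matrix depending on a parameter `a`.
M = sp.Matrix([[2, a],
               [a, 2]])
eigs = M.eigenvals()   # dict mapping eigenvalue -> algebraic multiplicity
```

Here `eigs` contains the two eigenvalues `2 - a` and `2 + a`, each with multiplicity one; for the matrices in the paper, the degeneracy structure of such eigenvalues is what determines the spatial structure of the asymptotics.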

Related Content

CASES: International Conference on Compilers, Architectures, and Synthesis for Embedded Systems. Publisher: ACM.

In this work, we present a new high-order Discontinuous Galerkin time integration scheme for second-order (in time) differential systems that typically arise from the space discretization of the elastodynamics equation. By rewriting the original equation as a system of first-order differential equations, we introduce the method and show that the resulting discrete formulation is well-posed, stable, and retains a super-optimal rate of convergence with respect to the discretization parameters, namely the time step and the polynomial approximation degree. A set of two- and three-dimensional numerical experiments confirms the theoretical bounds. Finally, the method is applied to real geophysical applications.
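The first-order rewriting that underlies the method can be sketched concretely: a second-order system M u'' + K u = f(t) becomes a first-order system in y = [u, v] with v = u'. The matrices below are a toy example, not a discretized elastodynamics operator, and a generic ODE solver stands in for the DG time integrator.

```python
# Hedged sketch: rewriting a second-order system M u'' + K u = f as a
# first-order system, as described above.  Toy 2x2 matrices; scipy's
# generic solver is used in place of the paper's DG time integrator.
import numpy as np
from scipy.integrate import solve_ivp

M = np.array([[1.0, 0.0], [0.0, 2.0]])    # mass matrix
K = np.array([[2.0, -1.0], [-1.0, 2.0]])  # stiffness matrix
Minv = np.linalg.inv(M)

def rhs(t, y):
    # y = [u, v] with v = u'; then u' = v and v' = M^{-1}(f - K u).
    u, v = y[:2], y[2:]
    f = np.zeros(2)                        # no forcing in this toy example
    return np.concatenate([v, Minv @ (f - K @ u)])

y0 = np.array([1.0, 0.0, 0.0, 0.0])       # initial displacement and velocity
sol = solve_ivp(rhs, (0.0, 1.0), y0, rtol=1e-9, atol=1e-9)
```

With no forcing the total energy 0.5 v'Mv + 0.5 u'Ku is conserved by the continuous system, which gives a quick sanity check on the rewritten form.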

The recently developed physics-informed machine learning has made great progress in solving nonlinear partial differential equations (PDEs); however, it may fail to provide reasonable approximations for PDEs with discontinuous solutions. In this paper, we focus on the discrete-time physics-informed neural network (PINN) and propose a hybrid PINN scheme for nonlinear PDEs. In this approach, the local solution structures are classified into smooth and nonsmooth scales by a discontinuity indicator; automatic differentiation is then employed to resolve the smooth scales, while an improved weighted essentially non-oscillatory (WENO) scheme captures the discontinuities. We test the approach on the viscous and inviscid Burgers equations, and show that, compared with the original discrete-time PINN, the hybrid approach approximates the discontinuous solution better, even at a relatively large time step.
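The abstract does not specify the discontinuity indicator; a minimal jump-based indicator on a 1D grid illustrates the idea of classifying cells into smooth and nonsmooth scales. The threshold rule below is an illustrative assumption, not the indicator used in the paper.

```python
# Hedged sketch: a simple discontinuity indicator of the kind described
# above, flagging grid cells whose local jump is large relative to the
# typical jump size.  The threshold rule is an illustrative assumption.
import numpy as np

def discontinuity_indicator(u, threshold=10.0):
    """Return a boolean mask: True where u looks nonsmooth."""
    jumps = np.abs(np.diff(u))          # |u_{i+1} - u_i|
    scale = np.mean(jumps) + 1e-14      # typical smooth-scale jump
    mask = np.zeros_like(u, dtype=bool)
    rough = jumps > threshold * scale
    mask[:-1] |= rough                  # flag both cells adjacent to a jump
    mask[1:] |= rough
    return mask

x = np.linspace(-1.0, 1.0, 201)
u = np.where(x < 0.3, np.sin(np.pi * x), -1.0)   # shock at x = 0.3
mask = discontinuity_indicator(u)
```

In the hybrid scheme, cells where the mask is True would be handled by the WENO branch and the remaining smooth cells by automatic differentiation.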

In this work, we introduce an inverse averaging finite element method (IAFEM) for solving the size-modified Poisson-Nernst-Planck (SMPNP) equations. Compared with the classical Poisson-Nernst-Planck (PNP) equations, the SMPNP equations add a nonlinear term to each of the Nernst-Planck (NP) fluxes to describe the steric repulsion, which makes it possible to treat multiple nonuniform particle sizes in simulations. Since the new terms involve sums and gradients of ion concentrations, the nonlinear coupling of the SMPNP equations is much stronger than that of the PNP equations. By introducing a generalized Slotboom transform, each size-modified NP equation is transformed into a self-adjoint equation with an exponentially behaved coefficient, whose simple form is similar to that of the standard NP equation under the Slotboom transformation. This treatment enables our recently developed inverse averaging technique to handle the exponential coefficients of the reformulated equations, with the advantages of numerical stability and flux conservation, especially in strongly nonlinear and convection-dominated cases. Compared with previous stabilization methods, the IAFEM proposed in this paper retains numerical stability for convection-dominated problems while being more concise and easier to implement numerically. Numerical experiments on a model problem with analytic solutions verify the accuracy and convergence order of the IAFEM for the SMPNP equations. Studies of the size effects in a sphere model and an ion channel system show that our IAFEM is more effective and robust than the traditional finite element method (FEM) for solving the SMPNP equations in simulations of biological systems.
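The classical Slotboom transform referred to above can be checked symbolically: for a standard 1D NP flux J = -D(c' + c φ'), setting u = c e^{φ} gives the self-adjoint form J = -D e^{-φ} u'. The size-modified (SMPNP) generalization adds steric terms not shown here.

```python
# Hedged sketch: verifying the classical Slotboom identity
#   -D (c' + c phi') == -D e^{-phi} (c e^{phi})'
# for arbitrary smooth c and phi; the paper's generalized transform for
# the size-modified fluxes is not reproduced here.
import sympy as sp

x = sp.symbols("x")
D = sp.symbols("D", positive=True)
c = sp.exp(-x**2)        # an arbitrary smooth concentration profile
phi = sp.sin(x)          # an arbitrary smooth potential

flux_np = -D * (sp.diff(c, x) + c * sp.diff(phi, x))
u = c * sp.exp(phi)      # Slotboom variable
flux_slotboom = -D * sp.exp(-phi) * sp.diff(u, x)

assert sp.simplify(flux_np - flux_slotboom) == 0
```

The exponential coefficient e^{-φ} in the transformed flux is exactly what the inverse averaging technique is designed to handle.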

In settings as diverse as autonomous vehicles, cloud computing, and pandemic quarantines, requests for service can arrive in near or true simultaneity with one another. This creates batches of arrivals to the underlying queueing system. In this paper, we study the staffing problem for the batch arrival queue. We show that batches place a significant stress on services, and thus require a high amount of resources and preparation. In fact, we find that there is no economy of scale as the number of customers in each batch increases, creating a stark contrast with the square root safety staffing rules enjoyed by systems with solitary arrivals of customers. Furthermore, when customers arrive both quickly and in batches, an economy of scale can exist, but it is weaker than what is typically expected. Methodologically, these staffing results follow from novel large batch and hybrid large-batch-and-large-rate limits of the general multi-server queueing model. In the pure large batch limit, we establish the first formal connection between multi-server queues and storage processes, another family of stochastic processes. By consequence, we show that the limit of the batch scaled queue length process is not asymptotically normal, and that, in fact, the fluid and diffusion-type limits coincide. This is what drives our staffing analysis of the batch arrival queue, and what implies that the (safety) staffing of this system must be directly proportional to the batch size just to achieve a non-degenerate probability of customers waiting.
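The contrast drawn above can be made concrete: with solitary arrivals, the square-root safety staffing rule adds only O(√R) servers beyond the offered load R, while the batch setting requires safety staffing proportional to the batch size. The constants β and γ below are illustrative, not values derived in the paper.

```python
# Hedged sketch: square-root safety staffing for solitary arrivals versus
# the batch-proportional staffing described above.  beta and gamma are
# illustrative constants, not values from the paper.
import math

def sqrt_safety_staffing(offered_load, beta=1.0):
    """Solitary arrivals: c = R + beta*sqrt(R) servers."""
    return math.ceil(offered_load + beta * math.sqrt(offered_load))

def batch_proportional_staffing(offered_load, batch_size, gamma=0.5):
    """Batch arrivals: safety staffing grows linearly in the batch size."""
    return math.ceil(offered_load + gamma * batch_size)

# Under square-root staffing, safety staffing as a fraction of the load
# shrinks as the load grows -- the economy of scale that batch arrivals lose.
small = sqrt_safety_staffing(100)       # 110 servers for load 100
large = sqrt_safety_staffing(10_000)    # 10_100 servers for load 10_000
```

Note that (110 - 100)/100 = 10% but (10_100 - 10_000)/10_000 = 1%: the relative safety margin vanishes with scale under solitary arrivals, whereas the batch-proportional rule offers no such economy.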

Human-in-the-loop aims to train an accurate prediction model with minimum cost by integrating human knowledge and experience. Humans can provide training data for machine learning applications and, with the help of machine-based approaches, directly accomplish tasks in the pipeline that are hard for computers. In this paper, we survey existing works on human-in-the-loop from a data perspective and classify them into three categories with a progressive relationship: (1) work that improves model performance through data processing, (2) work that improves model performance through interventional model training, and (3) the design of independent human-in-the-loop systems. Using this categorization, we summarize the major approaches in the field along with their technical strengths and weaknesses, and briefly classify and discuss applications in natural language processing, computer vision, and other areas. We also outline open challenges and opportunities. This survey intends to provide a high-level summary of human-in-the-loop research and to motivate interested readers to consider approaches for designing effective human-in-the-loop solutions.

Recommender Systems (RS) have employed knowledge distillation, a model compression technique that trains a compact student model with knowledge transferred from a pre-trained large teacher model. Recent work has shown that transferring knowledge from the teacher's intermediate layers significantly improves the recommendation quality of the student. However, these methods transfer the knowledge of individual representations point-wise, which is limiting because the primary information of RS lies in the relations in the representation space. This paper proposes a new topology distillation approach that guides the student by transferring the topological structure built upon the relations in the teacher space. We first observe that simply making the student learn the whole topological structure is not always effective and can even degrade the student's performance. We demonstrate that because the capacity of the student is highly limited compared to that of the teacher, learning the whole topological structure is daunting for the student. To address this issue, we propose a novel method named Hierarchical Topology Distillation (HTD), which distills the topology hierarchically to cope with the large capacity gap. Our extensive experiments on real-world datasets show that the proposed method significantly outperforms state-of-the-art competitors. We also provide in-depth analyses to ascertain the benefit of distilling the topology for RS.
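The "transfer relations, not points" idea can be sketched as a loss that matches the pairwise similarity structure of the teacher and student embedding spaces. HTD's hierarchical grouping is more involved; the minimal full-topology loss below only illustrates the basic principle.

```python
# Hedged sketch: a full-topology distillation loss matching pairwise
# cosine-similarity structure between teacher and student embeddings.
# HTD distills this topology hierarchically; only the basic relational
# loss is shown here, on random embeddings.
import numpy as np

def cosine_similarity_matrix(E):
    """Pairwise cosine similarities of the rows of E."""
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    return E @ E.T

def topology_distillation_loss(teacher_emb, student_emb):
    """Mean squared difference of the two similarity matrices."""
    Ts = cosine_similarity_matrix(teacher_emb)
    Ss = cosine_similarity_matrix(student_emb)
    return np.mean((Ts - Ss) ** 2)

rng = np.random.default_rng(0)
teacher = rng.normal(size=(8, 64))   # large teacher embeddings
student = rng.normal(size=(8, 16))   # compact student embeddings
loss = topology_distillation_loss(teacher, student)
```

Note that the student's embedding dimension can differ from the teacher's: only the n-by-n relational structure is compared, which is what makes topology a convenient distillation target.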

This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators that greatly reduces this error. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of estimating the causal quantities between the classical and the proposed estimators. The comparison spans a wide range of models, including linear regression, tree-based, and neural network-based models, under simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction of estimation error is strikingly substantial when the causal effects are accounted for correctly.
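The confounding problem described above can be illustrated on synthetic data: when the lending decision T depends on a confounder Z that also affects repayment Y, a naive difference-in-means estimator is badly biased, while a confounding-aware estimator recovers the true effect. The paper's actual estimators are not specified in the abstract; inverse propensity weighting with a known propensity is used here purely as an illustration.

```python
# Hedged sketch: bias from confounding in a lending-style setting.
# Naive difference-in-means vs. inverse propensity weighting (IPW);
# IPW with the true propensity stands in for the paper's estimators.
import numpy as np

rng = np.random.default_rng(42)
n, tau = 200_000, 1.0                       # tau: true causal effect
Z = rng.normal(size=n)                      # confounder (e.g. credit quality)
p = 1.0 / (1.0 + np.exp(-2.0 * Z))          # propensity: better credit -> approval
T = (rng.uniform(size=n) < p).astype(float) # credit decision
Y = tau * T + 2.0 * Z + rng.normal(scale=0.5, size=n)  # repayment

naive = Y[T == 1].mean() - Y[T == 0].mean()
ipw = np.mean(T * Y / p) - np.mean((1 - T) * Y / (1 - p))
```

Here `naive` overshoots the true effect of 1.0 by a wide margin because approved borrowers have systematically higher Z, whereas the weighted estimator is close to 1.0.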

The problem of Approximate Nearest Neighbor (ANN) search is fundamental in computer science and has benefited from significant progress in the past couple of decades. However, most work has been devoted to point sets, whereas complex shapes have not been sufficiently treated. Here, we focus on distance functions between discretized curves in Euclidean space: they appear in a wide range of applications, from road segments to time series in general dimension. For $\ell_p$-products of Euclidean metrics, for any $p$, we design simple and efficient data structures for ANN based on randomized projections, which are of independent interest. They serve to solve proximity problems under a notion of distance between discretized curves that generalizes both the discrete Fr\'echet and Dynamic Time Warping distances, the most popular and practical approaches to comparing such curves. We offer the first data structures and query algorithms for ANN with arbitrarily good approximation factor, at the expense of increased space usage and preprocessing time over existing methods. Query time complexity is comparable to, or significantly improved over, existing methods; our algorithms are especially efficient when the length of the curves is bounded.
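The randomized projections underlying such data structures approximately preserve Euclidean distances (the Johnson-Lindenstrauss property), which is what makes them usable as a preprocessing step for ANN. The demo below uses plain points; the paper builds curve-distance structures on top of such projections.

```python
# Hedged sketch: a Gaussian random projection approximately preserving
# pairwise Euclidean distances (Johnson-Lindenstrauss), the building
# block for the ANN data structures described above.  Plain points are
# used here; the paper applies projections to discretized curves.
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 50, 500, 2_000                   # k chosen generously for a tight demo
X = rng.normal(size=(n, d))                # n points in dimension d
P = rng.normal(size=(d, k)) / np.sqrt(k)   # random projection matrix
Y = X @ P                                  # projected points
```

With k this large, every pairwise distance is preserved within a few percent with overwhelming probability; in practice k is taken logarithmic in n, trading distortion for dimension.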

We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.
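The continuous-depth idea above can be sketched in a few lines: a "layer" whose output is the solution of an ODE whose right-hand side is a small neural network. The weights below are fixed random values and a generic black-box solver is used; training via the backpropagation scheme described in the abstract is omitted.

```python
# Hedged sketch: an ODE-defined "layer" -- the hidden state's derivative
# is parameterized by a tiny tanh MLP, and the output is the state at
# t = 1, obtained from a black-box solver.  Weights are random; training
# is not shown.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(8, 2))   # hidden-layer weights (hypothetical)
W2 = rng.normal(scale=0.5, size=(2, 8))   # output-layer weights (hypothetical)

def dynamics(t, h):
    # dh/dt = f(h; theta): a small neural network defines the derivative.
    return W2 @ np.tanh(W1 @ h)

def ode_layer(x):
    """Map input x to the ODE solution at t = 1 (a continuous-depth layer)."""
    sol = solve_ivp(dynamics, (0.0, 1.0), x, rtol=1e-6)
    return sol.y[:, -1]

y = ode_layer(np.array([1.0, -1.0]))
```

The solver's adaptive step selection is what gives the model its input-dependent evaluation strategy and its precision-for-speed trade-off.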

Because of continuous advances in mathematical programming, Mixed Integer Optimization has become competitive with popular regularization methods for selecting features in regression problems. The approach exhibits unquestionable foundational appeal and versatility, but also poses important challenges. We tackle these challenges, reducing the computational burden of tuning the sparsity bound (a parameter critical for effectiveness) and improving performance in the presence of feature collinearity and of signals that vary in nature and strength. Importantly, we render the approach efficient and effective in applications of realistic size and complexity, without resorting to relaxations or heuristics in the optimization, or abandoning rigorous cross-validation tuning. Computational viability and improved performance in subtler scenarios are achieved with a multi-pronged blueprint, leveraging characteristics of the Mixed Integer Programming framework and by means of whitening, a data pre-processing step.
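The underlying problem is best-subset selection under a sparsity bound k. The paper solves it with Mixed Integer Programming at realistic scale; for a small synthetic example, an exhaustive search over supports (feasible only for tiny p) makes the objective concrete.

```python
# Hedged sketch: best-subset regression with sparsity bound k, the
# problem the MIO approach above solves at scale.  Exhaustive search
# over supports stands in for the MIP solver and is only feasible for
# small p; the data are synthetic with true support {0, 3}.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
n, p, k = 100, 8, 2
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[[0, 3]] = 1.0                       # true support {0, 3}
y = X @ beta_true + rng.normal(scale=0.01, size=n)

best_rss, best_support = np.inf, None
for support in combinations(range(p), k):
    cols = list(support)
    coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    resid = y - X[:, cols] @ coef
    rss = resid @ resid
    if rss < best_rss:
        best_rss, best_support = rss, support
```

The search visits all C(8, 2) = 28 supports and recovers {0, 3}; the MIO formulation reaches the same optimum without enumeration, which is what makes cross-validating the bound k computationally demanding and worth the tuning machinery the paper develops.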
