
Dynamic Mode Decomposition (DMD) is a powerful data-driven method for extracting the spatio-temporal coherent structures that govern a given dynamical system. The method consists of stacking collected temporal snapshots into a matrix and approximating the nonlinear dynamics with a linear operator. The standard procedure assumes that the snapshots have the same dimensionality at every observation time. However, this is often not the case in numerical simulations with adaptive mesh refinement/coarsening schemes (AMR/C). This paper proposes a strategy that enables DMD to extract features from observations with different mesh topologies and dimensions, such as those found in AMR/C simulations. For this purpose, the adaptive snapshots are projected onto the same reference function space, enabling the use of snapshot-based methods such as DMD. The strategy is applied to challenging AMR/C simulations: a continuous diffusion-reaction epidemiological model for COVID-19, a density-driven gravity current simulation, and a bubble rising problem. We also evaluate DMD's efficiency in reconstructing the dynamics and some relevant quantities of interest. In particular, for the SEIRD model and the bubble rising problem, we evaluate DMD's ability to extrapolate in time (short-time future estimates).
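As a point of reference for the snapshot-based step described above, below is a minimal sketch of the standard (exact) DMD algorithm, assuming the AMR/C snapshots have already been projected onto the common reference space and assembled column-wise into a matrix; the function name and the fixed rank truncation are illustrative choices, not the paper's implementation.

```python
import numpy as np

def exact_dmd(snapshots, rank):
    """Standard (exact) DMD on a snapshot matrix whose columns are successive
    states; `rank` truncates the SVD of the first snapshot set."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]        # time-shifted snapshot pairs
    U, s, Vh = np.linalg.svd(X, full_matrices=False)  # thin SVD of X
    Ur = U[:, :rank]
    Sr_inv = np.diag(1.0 / s[:rank])
    Vr = Vh[:rank, :].conj().T
    A_tilde = Ur.conj().T @ Y @ Vr @ Sr_inv           # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)               # DMD eigenvalues (dynamics)
    modes = Y @ Vr @ Sr_inv @ W                       # exact DMD modes (structures)
    return eigvals, modes
```

The eigenvalues carry growth rates and frequencies, and projecting an initial condition onto the modes yields the time extrapolation used for short-time future estimates.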

Related Content

We investigate the problem of co-designing computation and communication in a multi-agent system (e.g., a sensor network or a multi-robot team). We consider the realistic setting where each agent acquires sensor data and is capable of local processing before sending updates to a base station, which is in charge of making decisions or monitoring phenomena of interest in real time. Longer processing at an agent leads to more informative updates but also larger delays, giving rise to a delay-accuracy trade-off in choosing the right amount of local processing at each agent. We assume that the available communication resources are limited due to interference, bandwidth, and power constraints. Thus, a scheduling policy needs to be designed to suitably share the communication channel among the agents. To that end, we develop a general formulation to jointly optimize the local processing at the agents and the scheduling of transmissions. Our novel formulation leverages the notion of Age of Information to quantify the freshness of data and capture the delays caused by computation and communication. We develop efficient resource allocation algorithms using the Whittle index approach and demonstrate our proposed algorithms in two practical applications: multi-agent occupancy grid mapping in time-varying environments, and ride sharing in autonomous vehicle networks. Our experiments show that the proposed co-design approach leads to a substantial performance improvement (18-82% in our tests).
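To make the computation-communication coupling concrete, here is a minimal sketch of a generic index-based scheduling step driven by Age of Information; the priority index (weighted age), the weights, and the reset to the local processing delay are illustrative placeholders, not the Whittle indices derived in the paper.

```python
import numpy as np

def schedule_slot(ages, weights, processing_delay):
    """One slot of a generic index policy: grant the channel to the agent with
    the largest priority index, then update every agent's Age of Information.
    The weighted-age index below is a placeholder, not the paper's Whittle index."""
    k = int(np.argmax(weights * ages))   # agent scheduled to transmit this slot
    ages = ages + 1                      # all information ages by one slot
    ages[k] = processing_delay[k]        # delivered update is as old as its local processing
    return ages, k
```

Longer local processing produces a more informative update but a larger reset age, which is exactly the delay-accuracy trade-off the joint formulation optimizes.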

We consider the mathematical analysis and numerical approximation of a system of nonlinear partial differential equations that arises in models relevant to steady isochoric flows of colloidal suspensions. The symmetric velocity gradient is assumed to be a monotone nonlinear function of the deviatoric part of the Cauchy stress tensor. We prove the existence of a weak solution to the problem, and under the additional assumption that the nonlinearity involved in the constitutive relation is Lipschitz continuous we also prove uniqueness of the weak solution. We then construct mixed finite element approximations of the system using both conforming and nonconforming finite element spaces. For both of these we prove the convergence of the method to the unique weak solution of the problem, and in the case of the conforming method we provide a bound on the error between the analytical solution and its finite element approximation in terms of the best approximation error from the finite element spaces. We propose first a Lions-Mercier type iterative method and then a classical fixed-point algorithm to solve the finite-dimensional problems resulting from the finite element discretisation of the system of nonlinear partial differential equations under consideration, and we present numerical experiments that illustrate the practical performance of the proposed numerical methods.
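As an illustration of the second solver mentioned above, the following is a generic Picard (fixed-point) iteration skeleton; `apply_G` is assumed to wrap one linearised finite element solve, and the tolerance and iteration cap are arbitrary. This sketches the algorithmic pattern only, not the paper's discretisation or its Lions-Mercier variant.

```python
import numpy as np

def fixed_point_solve(apply_G, x0, tol=1e-10, max_iter=200):
    """Picard iteration x_{k+1} = G(x_k) with a relative-increment stopping test.
    `apply_G` is a placeholder for one linearised solve of the discretised system."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x_new = apply_G(x)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x_new)):
            return x_new, k              # converged after k iterations
        x = x_new
    return x, max_iter                   # return last iterate if not converged
```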

In this paper we study asymptotic properties of the maximum likelihood estimator (MLE) for the speed of a stochastic wave equation. We follow a well-known spectral approach to write the solution as a Fourier series; we then project the solution onto an $N$-dimensional space and obtain the estimator as a function of time and $N$. We then show consistency of the MLE using classical stochastic analysis. Afterwards, we prove asymptotic normality using the Malliavin-Stein method. We also study asymptotic properties of a discretized version of the MLE for the parameter. We provide this asymptotic analysis of the proposed estimator as the number of Fourier modes, $N$, used in the estimation and the observation time go to infinity. Finally, we illustrate the theoretical results with numerical experiments.
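For concreteness, here is a hedged sketch of the spectral projection described above, assuming a wave equation of the form $u_{tt} = \theta \Delta u + \sigma \dot{W}$ with Laplacian eigenpairs $(\lambda_k, e_k)$; the paper's exact equation, noise structure, and parametrisation of the speed may differ.
\[
u_N(t,x) \;=\; \sum_{k=1}^{N} u_k(t)\, e_k(x), \qquad
\mathrm{d}\dot{u}_k(t) \;=\; -\theta \lambda_k\, u_k(t)\,\mathrm{d}t + \sigma\,\mathrm{d}w_k(t),
\]
so each Fourier mode evolves as a stochastic oscillator whose drift carries the unknown parameter $\theta$, and the MLE is assembled from the observed mode trajectories $u_1,\dots,u_N$ on a time interval $[0,T]$.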

Optimal experimental design (OED) is the general formalism for sensor placement and for decisions about the data collection strategy in engineered or natural experiments. This approach is prevalent in many critical fields such as battery design, numerical weather prediction, geosciences, and environmental and urban studies. State-of-the-art computational methods for experimental design, however, do not accommodate correlation structure in observational errors produced by many expensive-to-operate devices such as X-ray machines, radars, and satellites. Discarding evident data correlations leads to biased results, higher expenses, and waste of valuable resources. We present a general formulation of the OED formalism for model-constrained large-scale Bayesian linear inverse problems in which measurement errors are generally correlated. The proposed approach utilizes the Hadamard product of matrices to formulate the weighted likelihood and is valid for both finite- and infinite-dimensional Bayesian inverse problems. Extensive numerical experiments are carried out for empirical verification of the proposed approach using an advection-diffusion model, where the objective is to optimally place a small set of sensors, under a limited budget, to predict the concentration of a contaminant in a closed and bounded domain.
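As an illustration of the mechanism, the following is a hedged sketch of forming a design-weighted Gaussian data-misfit term via a Hadamard product; the outer product of the design weights used below is one plausible weighting matrix, not necessarily the exact construction in the paper.

```python
import numpy as np

def weighted_misfit(design_w, residual, noise_precision):
    """Hedged sketch: scale the (correlated) noise precision entrywise by a
    weighting matrix built from the sensor design weights, then evaluate the
    weighted Gaussian data-misfit.  The exact weighting matrix in the paper
    may differ; this only illustrates the Hadamard-product mechanism."""
    W = np.outer(design_w, design_w)        # design weighting matrix (assumption)
    P = W * noise_precision                 # Hadamard (entrywise) product
    return 0.5 * residual @ P @ residual    # weighted data-misfit value
```

Setting a sensor's weight to zero zeroes out its row and column of the effective precision, so an inactive sensor contributes neither its own error nor its correlations with active sensors.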

Location Routing is a fundamental planning problem in logistics, in which strategic decisions on the placement of facilities (depots, distribution centers, warehouses, etc.) are taken based on accurate estimates of operational routing costs. We present an approximation algorithm, i.e., an algorithm with proven worst-case guarantees both in terms of running time and solution quality, for the general capacitated version of this problem, in which both vehicles and facilities are capacitated. Until now, such algorithms were known only for the special case where facilities are uncapacitated or where their capacities can be extended arbitrarily at linear cost. Previously established lower bounds that are known to approximate the optimal solution value well in the uncapacitated case can be off by an arbitrary factor in the general case. We show that this issue can be overcome by a bifactor approximation algorithm that may slightly exceed facility capacities by an adjustable, arbitrarily small margin while approximating the optimal cost by a constant factor. In addition to these proven worst-case guarantees, we also assess the practical performance of our algorithm in a comprehensive computational study, showing that the approach allows efficient computation of near-optimal solutions for instance sizes beyond the reach of current state-of-the-art heuristics.

Motivated by recent developments in designing algorithms based on individual item scores for solving utility maximization problems, we study the framework of using test scores, defined as statistics of observed individual item performance data, for solving the budgeted stochastic utility maximization problem. We extend an existing scoring mechanism, namely replication test scores, to incorporate heterogeneous item costs as well as item values. We show that a natural greedy algorithm that selects items solely based on their replication test scores outputs solutions within a constant factor of the optimum for a broad class of utility functions. Our algorithms and approximation guarantees assume that test scores are noisy estimates of certain expected values with respect to marginal distributions of individual item values, thus making our algorithms practical and extending previous work that assumes noiseless estimates. Moreover, we show how our algorithm can be adapted to the setting where items arrive in a streaming fashion while maintaining the same approximation guarantee. We present numerical results, using synthetic data and data sets from the Academia.StackExchange Q&A forum, which show that our test score algorithm achieves competitive, and in some cases better, performance than a benchmark algorithm that requires access to a value oracle to evaluate function values.
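The greedy rule referred to above can be sketched as follows; selecting by score per unit cost under the budget is an assumption made for illustration, and the replication test scores themselves are assumed to be computed upstream from observed item performance data.

```python
def greedy_by_test_score(scores, costs, budget):
    """Hedged sketch of a test-score greedy: repeatedly add the affordable item
    with the best score-to-cost ratio until the budget is exhausted.  The
    paper's exact rule and its constant-factor guarantee rely on specific
    replication test scores, which are assumed precomputed here."""
    remaining = set(range(len(scores)))
    chosen, spent = [], 0.0
    while True:
        feasible = [i for i in remaining if spent + costs[i] <= budget]
        if not feasible:
            return chosen
        best = max(feasible, key=lambda i: scores[i] / costs[i])
        chosen.append(best)
        spent += costs[best]
        remaining.remove(best)
```

Because the rule never evaluates the utility function on a candidate set, it needs no value oracle, which is the practical advantage highlighted in the abstract.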

Candidates arrive sequentially for an interview process that results in each being ranked relative to their predecessors. Based on the ranks available at each time, one must design a decision mechanism that selects or dismisses the current candidate in an effort to maximize the probability of selecting the best. This classical version of the ``Secretary problem'' has been studied in depth, mostly using combinatorial approaches, along with numerous other variants. In this work we consider a new variant in which, during the review, one is allowed to query an external expert to improve the probability of making the correct decision. Unlike existing formulations, we consider experts that are not necessarily infallible and may provide faulty suggestions. For the solution of our problem we adopt a probabilistic methodology and view the querying times as consecutive stopping times, which we optimize with the help of optimal stopping theory. For each querying time we must also design a mechanism that decides whether or not to terminate the search at that time. This decision is straightforward under the usual assumption of infallible experts but, when experts are faulty, it has a far more intricate structure.
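For reference, a short simulation of the classical baseline (no expert queries) is sketched below; it implements the familiar 1/e stopping rule, not the paper's query policy, and the function name is illustrative.

```python
import random

def classical_secretary_trial(n, seed=None):
    """One trial of the classical 1/e rule: observe the first ~n/e candidates,
    then accept the first candidate better than every one seen so far.  This is
    the baseline problem only, not the expert-query mechanism studied here."""
    rng = random.Random(seed)
    ranks = list(range(n))                   # 0 denotes the overall best candidate
    rng.shuffle(ranks)
    cutoff = max(1, round(n / 2.718281828))  # observation phase of length ~n/e
    best_seen = min(ranks[:cutoff])
    for r in ranks[cutoff:]:
        if r < best_seen:
            return r == 0                    # accepted candidate is the best?
    return ranks[-1] == 0                    # forced to accept the last candidate
```

Averaging many trials recovers the well-known success probability of roughly 1/e, which querying a (possibly faulty) expert aims to improve.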

We present the first method for real-time full body capture that estimates the shape and motion of the body and hands together with a dynamic 3D face model from a single color image. Our approach uses a new neural network architecture that exploits correlations between body and hands at high computational efficiency. Unlike previous works, our approach is jointly trained on multiple datasets focusing on hand, body, or face separately, without requiring data in which all parts are annotated at the same time, which would be much more difficult to create with sufficient variety. This multi-dataset training enables superior generalization. In contrast to earlier monocular full body methods, our approach captures more expressive 3D face geometry and color by estimating the shape, expression, albedo, and illumination parameters of a statistical face model. Our method achieves competitive accuracy on public benchmarks, while being significantly faster and providing more complete face reconstructions.

Motion artifacts are a primary source of magnetic resonance (MR) image quality deterioration, with strong repercussions on diagnostic performance. Currently, MR motion correction is carried out either prospectively, with the help of motion tracking systems, or retrospectively, mainly by utilizing computationally expensive iterative algorithms. In this paper, we utilize a novel adversarial framework, titled MedGAN, for the joint retrospective correction of rigid and non-rigid motion artifacts in different body regions, without the need for a reference image. MedGAN utilizes a unique combination of non-adversarial losses and a novel generator architecture to capture the textures and fine-detailed structures of the desired artifact-free MR images. Quantitative and qualitative comparisons with other adversarial techniques illustrate the proposed model's superior performance.
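At a high level, combining adversarial and non-adversarial terms can be sketched as below; the specific losses, weights, and feature network are assumptions made for illustration and may differ from MedGAN's actual objective.

```python
import torch
import torch.nn.functional as F

def generator_loss(d_fake, fake, target, feature_extractor,
                   lam_adv=1.0, lam_pix=20.0, lam_perc=10.0):
    """Hedged sketch: pair an adversarial term with non-adversarial pixel and
    perceptual terms.  `feature_extractor` is a placeholder for a pretrained
    feature network; the weights and loss choices are illustrative only."""
    adv = F.binary_cross_entropy_with_logits(
        d_fake, torch.ones_like(d_fake))          # push generator to fool the discriminator
    pix = F.l1_loss(fake, target)                 # pixel-level fidelity to the motion-free image
    perc = F.l1_loss(feature_extractor(fake),
                     feature_extractor(target))   # perceptual/texture fidelity
    return lam_adv * adv + lam_pix * pix + lam_perc * perc
```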

Seam-cutting and seam-driven techniques have proven effective for handling imperfect image series in image stitching. Generally, seam-driven approaches use seam-cutting to find the best seam among one or finitely many alignment hypotheses based on a predefined seam quality metric. However, the quality metrics in most methods measure the average performance of the pixels on the seam without considering the relevance and variance among them. As a result, the seam with the minimal measure may not be optimal in human perception (i.e., it may be perception-inconsistent). In this paper, we propose a novel coarse-to-fine seam estimation method that applies the evaluation in a different way. For the pixels on the seam, we develop a patch-point evaluation algorithm that concentrates on their correlation and variation. The evaluations are then used to recalculate the difference map of the overlapping region and to reestimate the stitching seam. This evaluation-reestimation procedure iterates until the current seam changes negligibly compared with the previous seam. Experiments show that our proposed method finds a nearly perception-consistent seam after several iterations, outperforming conventional seam-cutting and other seam-driven methods.
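The evaluation-reestimation loop described above can be sketched as the following skeleton; `find_seam`, `patch_point_eval`, and `update_diff` are placeholders for a seam-cutting solver (e.g., a graph cut), the paper's patch-point evaluation, and the difference-map update, so only the control flow is illustrated.

```python
import numpy as np

def seam_change(seam_a, seam_b):
    """Mean per-row horizontal displacement between two seams (pixel paths)."""
    return float(np.mean(np.abs(np.asarray(seam_a) - np.asarray(seam_b))))

def iterative_seam_estimation(diff_map, find_seam, patch_point_eval, update_diff,
                              tol=0.5, max_iter=10):
    """Skeleton of the coarse-to-fine loop: evaluate the current seam, update the
    difference map of the overlap, reestimate the seam, and stop once the seam
    changes negligibly between iterations."""
    seam = find_seam(diff_map)
    for _ in range(max_iter):
        scores = patch_point_eval(seam)                # per-pixel seam quality
        diff_map = update_diff(diff_map, seam, scores) # reweight the overlap difference
        new_seam = find_seam(diff_map)
        if seam_change(seam, new_seam) < tol:          # negligible change: converged
            return new_seam
        seam = new_seam
    return seam
```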
