
As a mathematical model of high-speed flow and shock wave propagation in a complex multimaterial setting, Lagrangian hydrodynamics is characterized by moving meshes, advection-dominated solutions, and moving shock fronts with sharp gradients. These challenges hinder existing projection-based model reduction schemes from being practical. We develop several variations of projection-based reduced order modeling techniques for Lagrangian hydrodynamics by introducing three different reduced bases for the position, velocity, and energy fields. A time-windowing approach is also developed to address the challenge posed by the advection-dominated solutions. Lagrangian hydrodynamics is formulated as a nonlinear problem, which requires a proper hyper-reduction technique; we therefore apply the over-sampling DEIM and SNS approaches to reduce the complexity due to the nonlinear terms. Finally, we present both a posteriori and a priori error bounds associated with our reduced order model. We compare the performance of the spatial and time-windowing reduced order modeling approaches in terms of accuracy and speed-up with respect to the corresponding full order model for several numerical examples, namely Sedov blast, Gresho vortices, Taylor-Green vortices, and triple-point problems.
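
The abstract does not include an implementation, but two of its reusable ingredients, POD bases for the position, velocity, and energy fields and DEIM-style hyper-reduction of the nonlinear terms, can be sketched as follows. This is a minimal illustration with synthetic snapshot data and hypothetical sizes, not the paper's actual solver (which also handles time windowing and the SNS variant).

```python
import numpy as np

def pod_basis(snapshots, tol=1e-4):
    """Return a truncated POD basis for a snapshot matrix (n_dof x n_snap)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol) + 1)
    return U[:, :r]

def deim_indices(Phi):
    """Greedy DEIM sampling: pick one interpolation row per basis vector."""
    n, m = Phi.shape
    idx = [int(np.argmax(np.abs(Phi[:, 0])))]
    for j in range(1, m):
        P = Phi[idx, :j]                 # sampled rows of the current basis
        c = np.linalg.solve(P, Phi[idx, j])
        r = Phi[:, j] - Phi[:, :j] @ c   # residual over all rows
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

# Example with synthetic snapshots standing in for a velocity field; the
# position and energy fields would receive their own bases in the same way.
rng = np.random.default_rng(0)
V_snap = rng.standard_normal((500, 40))   # hypothetical velocity snapshots
Phi_v = pod_basis(V_snap)
sample_rows = deim_indices(Phi_v)
print(Phi_v.shape, sample_rows[:5])
```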

Related content

Logs are run-time information automatically generated by software; they record system events and activities together with their timestamps. Before deeper insights into the run-time status of the software can be obtained, a fundamental log-analysis step, called log parsing, is applied to extract structured templates and parameters from the semi-structured raw log messages. However, current log parsers treat each message as a character string, ignoring the semantic information carried by parameters and templates. We therefore propose the semantic parser SemParser to address the critical bottleneck of mining semantics from log messages. It consists of two steps: an end-to-end semantic miner and a joint parser. Specifically, the first step identifies explicit semantics inside a single log, and the second step jointly infers implicit semantics and computes structural outputs based on a contextual knowledge base. To analyze the effectiveness of our semantic parser, we first demonstrate that it can derive rich semantics from log messages collected from seven widely used systems, with an average F1 score of 0.987. We then conduct two representative downstream tasks, showing that appropriately extracted semantics improve current downstream techniques by 11.7% and 8.65% in anomaly detection and failure diagnosis, respectively. We believe these findings provide insights into semantically understanding log messages for the log analysis community.
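
As a point of reference for what "templates and parameters" means, here is a toy regex-based parser on hypothetical log lines. It is purely syntactic, which is exactly the limitation the abstract criticizes; SemParser's semantic miner and joint parser are not reproduced here.

```python
import re

# Hypothetical raw log lines (not from the paper's datasets).
logs = [
    "2023-01-02 10:15:04 INFO Deleting block blk_3828 on node 10.0.0.12",
    "2023-01-02 10:15:09 INFO Deleting block blk_4190 on node 10.0.0.17",
]

PARAM_RE = r"blk_\d+|\d+(?:\.\d+)+|\b\d+\b"

def naive_parse(message):
    """Toy syntax-only parser: mask numeric/ID tokens to obtain a template."""
    params = re.findall(PARAM_RE, message)
    template = re.sub(PARAM_RE, "<*>", message)
    return template, params

for line in logs:
    # Strip the timestamp/level header before parsing the free-text content.
    content = line.split(" ", 3)[3]
    template, params = naive_parse(content)
    print(template, params)
    # A semantics-aware parser would additionally label the parameters,
    # e.g. blk_3828 -> block id, 10.0.0.12 -> node address, so that
    # downstream tasks can reason over entities rather than raw strings.
```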

We develop a probabilistic characterisation of trajectorial expansion rates in non-autonomous stochastic dynamical systems that can be defined over a finite time interval and used for subsequent uncertainty quantification in Lagrangian (trajectory-based) predictions. These expansion rates are quantified via certain divergences (pre-metrics) between probability measures induced by the laws of the stochastic flow associated with the underlying dynamics. We construct scalar fields of finite-time divergence/expansion rates and show their existence and space-time continuity for general stochastic flows. Combining these divergence-rate fields with the 'information inequalities' we derived previously allows for quantification and mitigation of the uncertainty in path-based observables estimated from simplified models in a way that is amenable to algorithmic implementation, and it can be utilised in information-geometric analyses of statistical estimation and inference, as well as in data-driven machine/deep learning of coarse-grained models. We also derive a link between the divergence rates and finite-time Lyapunov exponents for probability measures and for path-based observables.
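
A hedged numerical sketch of the kind of quantity involved: two hypothetical drift models (a "full" one and a simplified, coarse-grained one) are simulated with Euler-Maruyama, and a crude Gaussian proxy for their time-T laws yields a KL-based finite-time divergence rate. The paper's actual divergence/expansion-rate fields and information inequalities are not reproduced; all model choices below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def euler_maruyama(x0, drift, sigma, T=1.0, n_steps=200):
    """Simulate an ensemble of 1-D SDE paths dX = drift(X, t) dt + sigma dW."""
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    for k in range(n_steps):
        t = k * dt
        x = x + drift(x, t) * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

def gaussian_kl(mu0, var0, mu1, var1):
    """KL divergence between 1-D Gaussians N(mu0, var0) || N(mu1, var1)."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

# Two hypothetical models of the same system: a "full" drift and a
# simplified (coarse-grained) drift; sigma and T are illustrative choices.
full = lambda x, t: -x + np.sin(2 * np.pi * t)
coarse = lambda x, t: -x
x0 = rng.normal(0.0, 0.1, size=5000)
T = 1.0
xT_full = euler_maruyama(x0, full, sigma=0.3, T=T)
xT_coarse = euler_maruyama(x0, coarse, sigma=0.3, T=T)

# Crude Gaussian proxy for the time-T laws and the associated
# finite-time divergence rate (divergence per unit time).
kl = gaussian_kl(xT_full.mean(), xT_full.var(), xT_coarse.mean(), xT_coarse.var())
print("finite-time divergence rate ~", kl / T)
```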

Efficient omission of symmetric solution candidates is essential for combinatorial problem-solving. Most of the existing approaches are instance-specific and focus on the automatic computation of Symmetry Breaking Constraints (SBCs) for each given problem instance. However, the application of such approaches to large-scale instances or advanced problem encodings might be problematic since the computed SBCs are propositional and, therefore, can neither be meaningfully interpreted nor transferred to other instances. As a result, a time-consuming recomputation of SBCs must be done before every invocation of a solver. To overcome these limitations, we introduce a new model-oriented approach for Answer Set Programming that lifts the SBCs of small problem instances into a set of interpretable first-order constraints using the Inductive Logic Programming paradigm. Experiments demonstrate the ability of our framework to learn general constraints from instance-specific SBCs for a collection of combinatorial problems. The obtained results indicate that our approach significantly outperforms a state-of-the-art instance-specific method as well as the direct application of a solver.
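
To make the notion of a symmetry breaking constraint concrete, the toy sketch below enumerates proper colorings of a small graph and filters them with a value-precedence rule, an interpretable, instance-independent constraint of the kind the first-order lifting aims to learn. It is only an illustration; the paper's ILP-based learning from instance-specific SBCs is not reproduced.

```python
from itertools import product

# Toy instance: a 4-cycle graph and 3 interchangeable colors.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n_vertices, n_colors = 4, 3

def proper(coloring):
    """A coloring is proper if no edge joins two equally colored vertices."""
    return all(coloring[u] != coloring[v] for u, v in edges)

def value_precedence(coloring):
    """Interpretable SBC: color k may appear only after color k-1 has
    appeared at an earlier vertex, which rules out color permutations."""
    seen_max = -1
    for c in coloring:
        if c > seen_max + 1:
            return False
        seen_max = max(seen_max, c)
    return True

all_solutions = [c for c in product(range(n_colors), repeat=n_vertices) if proper(c)]
canonical = [c for c in all_solutions if value_precedence(c)]
print(len(all_solutions), "proper colorings,", len(canonical), "after symmetry breaking")
```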

A framework is presented for fitting inverse problem models via variational Bayes approximations. This methodology offers flexibility in statistical model specification for a broad range of applications, good accuracy, and reduced model fitting times compared with standard Markov chain Monte Carlo methods. The message passing and factor graph fragment approach to variational Bayes that we describe facilitates streamlined implementation of approximate inference algorithms and forms the basis for software development. This approach allows numerous response distributions and penalizations to be incorporated flexibly into the inverse problem model. Even though our work is circumscribed to one- and two-dimensional response variables, we lay down an infrastructure in which efficient algorithm updates based on nullifying weak interactions between variables can also be derived for inverse problems in higher dimensions. Image processing applications motivated by biomedical and archaeological problems are included as illustrations.
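
For readers unfamiliar with the closed-form updates such schemes iterate, here is a minimal mean-field variational Bayes (CAVI) sketch for a linear inverse problem y = Ax + noise with a Gaussian prior and a Gamma prior on the noise precision. It is an assumption-laden toy, not the paper's message-passing / factor-graph-fragment implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear inverse problem y = A x + noise (e.g. a blurred signal).
n, p = 80, 60
A = rng.standard_normal((n, p)) / np.sqrt(p)
x_true = np.zeros(p); x_true[15:25] = 1.0
y = A @ x_true + 0.05 * rng.standard_normal(n)

# Mean-field VB for the model:
#   y | x, beta ~ N(A x, beta^{-1} I),  x ~ N(0, alpha^{-1} I),
#   beta ~ Gamma(a0, b0),  with alpha fixed for simplicity.
alpha, a0, b0 = 1.0, 1e-2, 1e-2
E_beta = 1.0
for it in range(50):
    # q(x) = N(m, S): Gaussian update given the expected noise precision.
    S = np.linalg.inv(alpha * np.eye(p) + E_beta * A.T @ A)
    m = E_beta * S @ A.T @ y
    # q(beta) = Gamma(a, b): update using E||y - A x||^2 under q(x).
    resid = y - A @ m
    e_sq = resid @ resid + np.trace(A @ S @ A.T)
    a, b = a0 + 0.5 * n, b0 + 0.5 * e_sq
    E_beta = a / b

print("estimated noise std:", (1.0 / E_beta) ** 0.5)
print("relative reconstruction error:", np.linalg.norm(m - x_true) / np.linalg.norm(x_true))
```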

Multivariate Analysis (MVA) comprises a family of well-known methods for feature extraction that exploit correlations among the input variables representing the data. One important property enjoyed by most such methods is uncorrelation among the extracted features. Recently, regularized versions of MVA methods have appeared in the literature, mainly with the goal of gaining interpretability of the solution. In these cases, the solutions can no longer be obtained in closed form, and more complex optimization methods that rely on the iteration of two steps are frequently used. This paper resorts to an alternative approach to solve this iterative problem efficiently. The main novelty of this approach lies in preserving several properties of the original methods, most notably the uncorrelation of the extracted features. Under this framework, we propose a novel method that takes advantage of the l2,1 norm to perform variable selection during the feature extraction process. Experimental results over different problems corroborate the advantages of the proposed formulation in comparison to state-of-the-art formulations.
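
The row-sparsity mechanism behind the l2,1 penalty can be illustrated with its proximal operator (group soft-thresholding), which shrinks whole rows of a projection matrix to zero and thereby deselects input variables. The sketch below is generic and does not reproduce the paper's MVA algorithm.

```python
import numpy as np

def prox_l21(W, lam):
    """Proximal operator of lam * ||W||_{2,1}: group soft-thresholding that
    shrinks each row of W jointly, driving irrelevant rows exactly to zero."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return W * scale

rng = np.random.default_rng(3)
W = rng.standard_normal((10, 3))          # 10 input variables, 3 extracted features
W_sparse = prox_l21(W, lam=1.5)
selected = np.flatnonzero(np.linalg.norm(W_sparse, axis=1) > 0)
print("variables kept:", selected)
```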

The key challenge in learning dense correspondences lies in the lack of ground-truth matches for real image pairs. While photometric consistency losses provide unsupervised alternatives, they struggle with large appearance changes, which are ubiquitous in geometric and semantic matching tasks. Moreover, methods relying on synthetic training pairs often suffer from poor generalisation to real data. We propose Warp Consistency, an unsupervised learning objective for dense correspondence regression. Our objective is effective even in settings with large appearance and viewpoint changes. Given a pair of real images, we first construct an image triplet by applying a randomly sampled warp to one of the original images. We derive and analyze all flow-consistency constraints arising within the triplet. From our observations and empirical results, we design a general unsupervised objective employing two of the derived constraints. We validate our warp consistency loss by training three recent dense correspondence networks for the geometric and semantic matching tasks. Our approach sets a new state-of-the-art on several challenging benchmarks, including MegaDepth, RobotCar and TSS. Code and models will be released at //github.com/PruneTruong/DenseMatching.
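
A minimal sketch of the central ingredient, composing the known synthetic warp with a predicted flow and penalizing deviation from the directly predicted flow, is given below with random stand-in flow fields. It is not the paper's exact objective (which selects specific constraints from the full analysis); see the linked repository for the released code.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose_flows(flow_ab, flow_bc):
    """Compose dense displacement fields of shape (2, H, W), ordering (dy, dx):
    flow_ac(x) = flow_ab(x) + flow_bc(x + flow_ab(x))."""
    _, H, W = flow_ab.shape
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    ys, xs = yy + flow_ab[0], xx + flow_ab[1]          # where each pixel lands in frame B
    sampled = np.stack([
        map_coordinates(flow_bc[c], [ys, xs], order=1, mode='nearest')
        for c in range(2)
    ])
    return flow_ab + sampled

# Hypothetical setup: W_known is the randomly sampled synthetic warp I' -> I,
# flow_IJ and flow_IpJ are a network's predictions for I -> J and I' -> J.
rng = np.random.default_rng(4)
H, W_ = 32, 32
W_known = rng.uniform(-2, 2, size=(2, H, W_))
flow_IJ = rng.uniform(-2, 2, size=(2, H, W_))
flow_IpJ = rng.uniform(-2, 2, size=(2, H, W_))

# Warp-consistency-style objective: the predicted I' -> J flow should match
# the composition of the known warp with the predicted I -> J flow.
target = compose_flows(W_known, flow_IJ)
loss = np.mean(np.abs(flow_IpJ - target))
print("warp-consistency loss:", loss)
```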

Node classification is an important problem in graph data management. It is commonly solved by various label propagation methods that work iteratively, starting from a few labeled seed nodes. For graphs with arbitrary compatibilities between classes, these methods crucially depend on knowing the compatibility matrix, which must be provided by either domain experts or heuristics. Can we instead directly estimate the correct compatibilities from a sparsely labeled graph in a principled and scalable way? We answer this question affirmatively and suggest a method called distant compatibility estimation that works even on extremely sparsely labeled graphs (e.g., 1 in 10,000 nodes is labeled) in a fraction of the time it later takes to label the remaining nodes. Our approach first creates multiple factorized graph representations (whose size is independent of the graph) and then performs estimation on these smaller graph sketches. We define algebraic amplification as the more general idea of leveraging algebraic properties of an algorithm's update equations to amplify sparse signals. We show that our estimator is orders of magnitude faster than an alternative approach and that the end-to-end classification accuracy is comparable to using gold-standard compatibilities. This makes it a cheap preprocessing step for any existing label propagation method and removes the current dependence on heuristics.
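
For context, the sketch below runs a generic compatibility-aware label propagation on a toy heterophilous graph with a hand-specified compatibility matrix H. The paper's contribution, estimating H from an extremely sparsely labeled graph via factorized sketches, is not reproduced; every parameter here is illustrative.

```python
import numpy as np

def propagate(adj, seeds, seed_mask, H, n_iter=30):
    """Compatibility-aware label propagation: each node aggregates its
    neighbours' beliefs and maps them through the class-compatibility
    matrix H; seed nodes stay clamped to their known labels."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    X = np.full(seeds.shape, 1.0 / seeds.shape[1])
    X[seed_mask] = seeds[seed_mask]
    for _ in range(n_iter):
        X = (adj / deg) @ X @ H
        X = X / np.maximum(X.sum(axis=1, keepdims=True), 1e-12)  # keep rows as distributions
        X[seed_mask] = seeds[seed_mask]                          # clamp the labeled seeds
    return X.argmax(axis=1)

# Toy heterophilous graph: two classes that link mostly across classes.
rng = np.random.default_rng(5)
n = 200
labels = np.repeat([0, 1], n // 2)
p = np.where(labels[:, None] == labels[None, :], 0.02, 0.10)
adj = np.triu(rng.random((n, n)) < p, 1).astype(float)
adj = adj + adj.T

# Compatibility matrix H[k, c]: how strongly a neighbour of class k suggests
# class c; here it encodes heterophily and is simply assumed to be known.
H = np.array([[0.1, 0.9], [0.9, 0.1]])
seed_idx = rng.choice(n, size=10, replace=False)
seed_mask = np.zeros(n, dtype=bool); seed_mask[seed_idx] = True
seeds = np.eye(2)[labels]                 # one-hot labels (only used at the seeds)

pred = propagate(adj, seeds, seed_mask, H)
print("accuracy:", (pred == labels).mean())
```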

For neural networks (NNs) with rectified linear unit (ReLU) or binary activation functions, we show that their training can be accomplished in a reduced parameter space. Specifically, the weights in each neuron can be trained on the unit sphere, as opposed to the entire space, and the threshold can be trained in a bounded interval, as opposed to the real line. We show that the NNs in the reduced parameter space are mathematically equivalent to standard NNs with parameters in the whole space. The reduced parameter space should facilitate the optimization procedure for network training, as the search space becomes (much) smaller. We demonstrate the improved training performance using numerical examples.
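
The stated equivalence follows from the positive homogeneity of ReLU: moving a neuron's weights onto the unit sphere only rescales its output, and that scale can be absorbed into the outgoing weight. The sketch below checks this numerically for a small two-layer network; the paper's precise construction, including the bounded threshold interval, is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)
relu = lambda z: np.maximum(z, 0.0)

# A small two-layer ReLU network with unconstrained parameters.
d, h = 5, 16
W = rng.standard_normal((h, d))      # hidden weights
b = rng.standard_normal(h)           # hidden thresholds
a = rng.standard_normal(h)           # output weights

# Equivalent network whose hidden weights lie on the unit sphere:
# by positive homogeneity, ReLU(w.x + b) = ||w|| ReLU((w/||w||).x + b/||w||),
# and the scale ||w|| is absorbed into the outgoing weight.
norms = np.linalg.norm(W, axis=1)
W_unit, b_unit, a_scaled = W / norms[:, None], b / norms, a * norms

X = rng.standard_normal((1000, d))
y_full = relu(X @ W.T + b) @ a
y_reduced = relu(X @ W_unit.T + b_unit) @ a_scaled
print("max discrepancy:", np.max(np.abs(y_full - y_reduced)))
```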

We propose a new estimation method for topic models that is not a variation of the existing simplex-finding algorithms and that estimates the number of topics K from the observed data. We derive new finite-sample minimax lower bounds for the estimation of A, as well as new upper bounds for our proposed estimator. We describe the scenarios in which our estimator is minimax adaptive. Our finite-sample analysis is valid for any number of documents (n), individual document length (N_i), dictionary size (p) and number of topics (K), and both p and K are allowed to increase with n, a situation not handled well by previous analyses. We complement our theoretical results with a detailed simulation study. We illustrate that the new algorithm is faster and more accurate than the current ones, even though it starts with the computational and theoretical disadvantage of not knowing the correct number of topics K, while the competing methods are provided with the correct value in our simulations.
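
To fix the notation (n, N_i, p, K), here is a small synthetic generator of the kind a simulation study would use, assuming, as is standard in this literature, that A denotes the p x K word-topic matrix. The proposed estimator itself is not reproduced, and all sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Dimensions following the abstract's notation: n documents, dictionary
# size p, K topics, and per-document lengths N_i.
n, p, K = 300, 1000, 5
N = rng.integers(80, 200, size=n)

# Topic matrix A (p x K): columns are word distributions over the dictionary.
A = rng.dirichlet(np.full(p, 0.05), size=K).T
# Per-document topic weights W (K x n): columns lie on the probability simplex.
W = rng.dirichlet(np.full(K, 0.3), size=n).T

# Observed counts: document i has N_i words drawn from the mixture A W_i.
mix = A @ W                                  # p x n word probabilities per document
mix = mix / mix.sum(axis=0, keepdims=True)   # guard against floating-point drift
counts = np.stack([rng.multinomial(N[i], mix[:, i]) for i in range(n)])
print(counts.shape, "- empirical frequencies approximate A @ W up to sampling noise")
```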

For the challenging semantic image segmentation task, the most effective models have traditionally combined the structured modelling capabilities of Conditional Random Fields (CRFs) with the feature extraction power of CNNs. In more recent works, however, CRF post-processing has fallen out of favour. We argue that this is mainly due to the slow training and inference speeds of CRFs, as well as the difficulty of learning the internal CRF parameters. To overcome both issues we propose to add the assumption of conditional independence to the framework of fully-connected CRFs. This allows us to reformulate the inference in terms of convolutions, which can be implemented highly efficiently on GPUs. Doing so speeds up inference and training by a factor of more than 100. All parameters of the convolutional CRFs can easily be optimized using backpropagation. To facilitate further CRF research we make our implementation publicly available. Please visit: //github.com/MarvinTeichmann/ConvCRF
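
The authors' implementation lives at the linked repository; the numpy sketch below only illustrates the key idea that, under the added conditional-independence (local-window) assumption, one mean-field message-passing step reduces to a windowed, convolution-like aggregation of the label distribution weighted by a Gaussian pairwise kernel (toroidal boundaries are used here for brevity).

```python
import numpy as np

def local_message_pass(Q, feats, k=5, theta_spatial=3.0, theta_feat=0.2):
    """One mean-field message computation restricted to a k x k window:
    each pixel sums its neighbours' label distributions Q, weighted by a
    Gaussian kernel over spatial offset and feature distance."""
    C, H, W = Q.shape
    r = k // 2
    msg = np.zeros_like(Q)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue                       # no message from a pixel to itself
            shifted_Q = np.roll(Q, shift=(dy, dx), axis=(1, 2))
            shifted_f = np.roll(feats, shift=(dy, dx), axis=(1, 2))
            feat_dist2 = ((feats - shifted_f) ** 2).sum(axis=0)
            kernel = np.exp(-(dy * dy + dx * dx) / (2 * theta_spatial ** 2)
                            - feat_dist2 / (2 * theta_feat ** 2))
            msg += kernel[None] * shifted_Q
    return msg

# Toy inputs: a 3-class label distribution Q and RGB-like guidance features.
rng = np.random.default_rng(8)
Q = rng.dirichlet(np.ones(3), size=(24, 24)).transpose(2, 0, 1)   # (C, H, W)
feats = rng.random((3, 24, 24))
msg = local_message_pass(Q, feats)
print(msg.shape)
```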
