Compatible finite element discretisations for the atmospheric equations of motion have recently attracted considerable interest. Semi-implicit timestepping methods require the repeated solution of a large saddle-point system of linear equations. Preconditioning this system is challenging since the velocity mass matrix is non-diagonal, leading to a dense Schur complement. Hybridisable discretisations overcome this issue: weakly enforcing continuity of the velocity field with Lagrange multipliers leads to a sparse system of equations, which has a similar structure to the pressure Schur complement in traditional approaches. We describe how the hybridised sparse system can be preconditioned with a non-nested two-level preconditioner. To solve the coarse system, we use the multigrid pressure solver that is employed in the approximate Schur complement method previously proposed by some of the authors. Our approach significantly reduces the number of solver iterations. The method shows excellent performance and scales to large numbers of cores in the Met Office next-generation climate and weather prediction model LFRic.
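
To make the solver structure concrete, here is a minimal SciPy sketch of a non-nested two-level preconditioner of this general kind, used inside a Krylov iteration: damped-Jacobi smoothing on the fine level plus a coarse-grid correction. The tridiagonal matrix is a generic SPD stand-in for the hybridised trace system, the piecewise-constant aggregation is a stand-in for the non-nested coarse space, and the direct coarse solve stands in for the multigrid pressure solver; none of this is the LFRic implementation.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 256                                       # fine-level unknowns (hypothetical size)
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # SPD stand-in

# Piecewise-constant aggregation onto a 4x coarser level.
nc = n // 4
R = sp.lil_matrix((nc, n))
for i in range(nc):
    R[i, 4 * i:4 * i + 4] = 0.25
R = R.tocsr()
Ac_solve = spla.factorized((R @ A @ R.T).tocsc())  # stand-in for the coarse multigrid solve

Dinv = 1.0 / A.diagonal()                     # Jacobi smoother

def two_level(r):
    x = 0.6 * Dinv * r                        # pre-smoothing (damped Jacobi)
    x = x + R.T @ Ac_solve(R @ (r - A @ x))   # coarse-grid correction
    x = x + 0.6 * Dinv * (r - A @ x)          # post-smoothing (keeps M symmetric)
    return x

M = spla.LinearOperator((n, n), matvec=two_level)
b = np.ones(n)
x, info = spla.cg(A, b, M=M)                  # preconditioned Krylov solve
print(info, np.linalg.norm(b - A @ x))
```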

Related content


The ever-growing size of modern space-time data sets, such as those collected by remote sensing, requires new techniques for their efficient and automated processing, including gap-filling of missing values. CUDA-based parallelization on GPUs has become a popular way to dramatically increase the computational efficiency of various approaches. Recently, we proposed a computationally efficient and competitive, yet simple, spatial prediction approach inspired by statistical physics models, called the modified planar rotator (MPR) method. Its GPU implementation provided an additional impressive acceleration, exceeding two orders of magnitude compared with CPU calculations. In the current study we propose a rather general approach to modelling spatial heterogeneity in GPU-implemented spatial prediction methods for two-dimensional gridded data, by introducing spatial variability into the model parameters. Predictions of unknown values are obtained from non-equilibrium conditional simulations, assuming ``local'' equilibrium conditions. We demonstrate that the proposed method leads to significant improvements in both prediction performance and computational efficiency.
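
As a rough illustration of the MPR idea, and not of the authors' GPU implementation, the NumPy toy below performs conditional gap-filling with planar-rotator spins: data are mapped linearly to spin angles, missing spins are relaxed towards ``local'' equilibrium by repeatedly taking the circular mean of their four neighbours, and observed spins stay fixed throughout. The angle range, the zero-temperature update, and the periodic boundaries implied by np.roll are all simplifying assumptions; the spatial heterogeneity proposed in the study would enter by letting the model parameters vary across the grid.

```python
import numpy as np

def mpr_fill(grid, observed, iters=500, seed=0):
    """grid: 2-D array with NaNs at missing cells; observed: boolean mask."""
    rng = np.random.default_rng(seed)
    lo, hi = np.nanmin(grid), np.nanmax(grid)
    theta = (grid - lo) / (hi - lo) * np.pi            # data -> spin angles in [0, pi]
    theta[~observed] = rng.uniform(0.0, np.pi, (~observed).sum())
    for _ in range(iters):
        s = np.zeros(theta.shape, dtype=complex)
        for shift, ax in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
            s += np.roll(np.exp(1j * theta), shift, axis=ax)   # 4-neighbour spin sum
        # circular mean maximises local alignment sum cos(theta_i - theta_j);
        # it stays in [0, pi] when all neighbour angles lie in [0, pi]
        theta[~observed] = np.angle(s)[~observed]              # observed cells never change
    return lo + theta / np.pi * (hi - lo)                      # spin angles -> data
```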

This paper proposes a Kolmogorov-Smirnov type statistic and a Cram\'er-von Mises type statistic to test linearity in semi-functional partially linear regression models. Our test statistics are based on a residual marked empirical process indexed by a randomly projected functional covariate, which is able to circumvent the "curse of dimensionality" brought by the functional covariate. The asymptotic properties of the proposed test statistics under the null, the fixed alternative, and a sequence of local alternatives converging to the null at the $n^{-1/2}$ rate are established. A straightforward wild bootstrap procedure is suggested to estimate the critical values that are required to carry out the tests in practical applications. Results from an extensive simulation study show that our tests perform reasonably well in finite samples. Finally, we apply our tests to the Tecator and AEMET datasets to check whether the assumption of linearity is supported by these datasets.
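
Under simplifying assumptions, the shape of these tests can be sketched in a few lines of NumPy: the marks are the residuals from the fitted null (linear) model, the index is the inner product of each functional covariate with a randomly drawn direction, and the wild bootstrap multiplies the marks by Rademacher weights. A faithful implementation would also refit the null model within each bootstrap replication, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)

def ks_cvm(resid, proj):
    """KS- and CvM-type statistics of the residual marked empirical process
    R_n(u) = n^{-1/2} * sum_i resid_i * 1{proj_i <= u}, evaluated at the sample points."""
    n = len(resid)
    order = np.argsort(proj)
    csum = np.cumsum(resid[order]) / np.sqrt(n)
    return np.abs(csum).max(), np.mean(csum ** 2)

def wild_bootstrap_pvalues(resid, proj, B=500):
    ks0, cvm0 = ks_cvm(resid, proj)
    ks_b, cvm_b = np.empty(B), np.empty(B)
    for b in range(B):
        v = rng.choice([-1.0, 1.0], size=len(resid))   # Rademacher multipliers
        ks_b[b], cvm_b[b] = ks_cvm(resid * v, proj)
    return (ks_b >= ks0).mean(), (cvm_b >= cvm0).mean()

# proj_i would be <X_i, h> for a randomly drawn direction h; resid_i are the
# residuals from the null (linear) fit.
```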

Objective: Voice disorders significantly compromise individuals' ability to speak in their daily lives. Without early diagnosis and treatment, these disorders may deteriorate drastically. Automatic classification systems for use at home are therefore desirable for people who lack access to clinical disease assessments. However, the performance of such systems may be weakened by constrained hardware resources and by the domain mismatch between clinical data and noisy real-world data. Methods: This study develops a compact and domain-robust voice disorder classification system to identify utterances as healthy, neoplasm, or benign structural disease. The proposed system uses a feature extractor composed of factorized convolutional neural networks and then applies domain adversarial training to reconcile the domain mismatch by extracting domain-invariant features. Results: The unweighted average recall in the noisy real-world domain improved by 13% and remained at 80% in the clinic domain with only slight degradation; the domain mismatch was effectively eliminated. Moreover, the proposed system reduced both memory and computation usage by over 73.9%. Conclusion: By deploying factorized convolutional neural networks and domain adversarial training, domain-invariant features can be derived for voice disorder classification with limited resources. The promising results confirm that the proposed system can significantly reduce resource consumption and improve classification accuracy by accounting for the domain mismatch. Significance: To the best of our knowledge, this is the first study to jointly consider model compression and noise robustness in real-world voice disorder classification. The proposed system is intended for deployment on embedded systems with limited resources.
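
A minimal PyTorch sketch of the two ingredients named in the Methods, with hypothetical layer sizes and an assumed 40-dimensional acoustic feature input; this is not the authors' architecture. The factorized convolution is realised as a depthwise-plus-pointwise pair, and domain adversarial training is implemented with a gradient-reversal layer feeding a domain discriminator.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients on the way back."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, g):
        return -ctx.lam * g, None

def factorized_block(cin, cout):
    # depthwise + pointwise convolution: one common way to factorize a full conv
    return nn.Sequential(
        nn.Conv1d(cin, cin, kernel_size=3, padding=1, groups=cin),
        nn.Conv1d(cin, cout, kernel_size=1),
        nn.BatchNorm1d(cout), nn.ReLU())

class Net(nn.Module):
    def __init__(self, n_feat=40, n_classes=3, n_domains=2):
        super().__init__()
        self.feat = nn.Sequential(factorized_block(n_feat, 64),
                                  factorized_block(64, 64),
                                  nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.cls = nn.Linear(64, n_classes)   # disease classifier
        self.dom = nn.Linear(64, n_domains)   # domain discriminator
    def forward(self, x, lam=1.0):            # x: (batch, n_feat, time)
        z = self.feat(x)
        return self.cls(z), self.dom(GradReverse.apply(z, lam))
```

Training would minimise the sum of the disease-classification and domain-classification losses; the reversed gradient drives the feature extractor towards domain-invariant representations.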

Our aim is to develop dynamic data structures that support $k$-nearest neighbors ($k$-NN) queries for a set of $n$ point sites in the plane in $O(f(n) + k)$ time, where $f(n)$ is some polylogarithmic function of $n$. The key component is a general query algorithm that allows us to find the $k$-NN spread over $t$ substructures simultaneously, thus reducing an $O(tk)$ term in the query time to $O(k)$. Combining this technique with the logarithmic method allows us to turn any static $k$-NN data structure into a data structure supporting both efficient insertions and queries. For the fully dynamic case, this technique allows us to recover the deterministic, worst-case, $O(\log^2 n/\log\log n + k)$ query time for the Euclidean distance claimed before, while preserving the polylogarithmic update times. We adapt this data structure to also support fully dynamic \emph{geodesic} $k$-NN queries among a set of sites in a simple polygon. For this purpose, we design a shallow-cutting-based, deletion-only $k$-NN data structure. More generally, we obtain a dynamic planar $k$-NN data structure for any type of distance function for which we can build vertical shallow cuttings. We apply all of our methods in the plane for the Euclidean distance, the geodesic distance, and general, constant-complexity, algebraic distance functions.
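
The following Python sketch shows the baseline this technique improves upon: with each substructure modelled as an iterator yielding its sites in nondecreasing distance order, a single heap holding one candidate per substructure merges the answers in $O(t + k \log t)$ time, already better than the naive $O(tk)$. The paper's query algorithm is more refined and removes the remaining logarithmic overhead to reach $O(k)$.

```python
import heapq

def k_nearest_over_structures(structs, k):
    """structs: iterators yielding (distance, site) in nondecreasing distance,
    one per substructure. Returns the k globally nearest sites, pulling only
    one candidate per structure at a time instead of k from each."""
    heap = []
    for i, it in enumerate(structs):
        first = next(it, None)
        if first is not None:
            heapq.heappush(heap, (first[0], i, first[1], it))
    out = []
    while heap and len(out) < k:
        d, i, site, it = heapq.heappop(heap)
        out.append((d, site))
        nxt = next(it, None)                 # refill from the same substructure
        if nxt is not None:
            heapq.heappush(heap, (nxt[0], i, nxt[1], it))
    return out

# Example with two substructures given as sorted (distance, site) streams:
a = iter([(0.5, 'p1'), (2.0, 'p4')])
b = iter([(0.9, 'p2'), (1.1, 'p3')])
print(k_nearest_over_structures([a, b], k=3))   # [(0.5,'p1'), (0.9,'p2'), (1.1,'p3')]
```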

Understanding emergent behaviors of reinforcement learning (RL) agents may be difficult, since such agents are often trained in complex environments using highly complex decision-making procedures. This has given rise to a variety of approaches to explainability in RL that aim to reconcile discrepancies that may arise between the behavior of an agent and the behavior that is anticipated by an observer. Most recent approaches have relied either on domain knowledge that may not always be available, on an analysis of the agent's policy, or on an analysis of specific elements of the underlying environment, typically modeled as a Markov Decision Process (MDP). Our key claim is that even if the underlying model is not fully known (e.g., the transition probabilities have not been accurately learned) or is not maintained by the agent (i.e., when using model-free methods), the model can nevertheless be exploited to automatically generate explanations. For this purpose, we suggest using formal MDP abstractions and transforms, previously used in the literature for expediting the search for optimal policies, to automatically produce explanations. Since such transforms are typically based on a symbolic representation of the environment, they can provide meaningful explanations for gaps between the anticipated and actual agent behavior. We formally define the explainability problem, suggest a class of transforms that can be used for explaining emergent behaviors, and suggest methods that enable efficient search for an explanation. We demonstrate the approach on a set of standard benchmarks.
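
As a toy illustration of one such transform, and not of the paper's full framework, the sketch below aggregates MDP states with a user-supplied mapping and solves both models by value iteration; if the observer's anticipated behavior is optimal in the transformed MDP while the agent's actual behavior is optimal in the original, the transform itself localises the source of the gap. All names and shapes are hypothetical.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P: transitions of shape [A, S, S]; R: rewards of shape [A, S]."""
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * (P @ V)          # Q[a, s] = R[a, s] + gamma * E[V(next)]
        Vn = Q.max(axis=0)
        if np.abs(Vn - V).max() < tol:
            return Q.argmax(axis=0)      # greedy policy, one action per state
        V = Vn

def aggregate(P, R, phi, n_abs):
    """State-aggregation transform: phi maps each concrete state to an abstract
    state (assumed onto {0, ..., n_abs-1}); uniform aggregation weights."""
    A, S, _ = P.shape
    Pa, Ra = np.zeros((A, n_abs, n_abs)), np.zeros((A, n_abs))
    cnt = np.bincount(phi, minlength=n_abs)
    for s in range(S):
        Ra[:, phi[s]] += R[:, s] / cnt[phi[s]]
        for t in range(S):
            Pa[:, phi[s], phi[t]] += P[:, s, t] / cnt[phi[s]]
    return Pa, Ra

# A transform "explains" a behavior gap if the anticipated policy is optimal in
# the transformed MDP while the agent's actual policy is optimal in the original.
```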

We study monitoring of linear-time arithmetic properties against finite traces generated by an unknown dynamic system. The monitoring state is determined by considering at once the trace prefix seen so far and all its possible finite-length future continuations. This makes monitoring at least as hard as satisfiability and validity. Traces consist of finite sequences of assignments of a fixed set of variables to numerical values. Properties are specified in a logic we call ALTLf, combining LTLf (LTL on finite traces) with linear arithmetic constraints that may carry lookahead, i.e., variables may be compared over multiple instants of the trace. While the monitoring problem for this setting is undecidable in general, we show decidability for (a) properties without lookahead, and (b) properties with lookahead that satisfy the abstract, semantic condition of finite summary, studied before in the context of model checking. We then single out concrete, practically relevant classes of constraints guaranteeing finite summary. Feasibility is witnessed by a prototype implementation.
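
A small Python sketch of the flavour of such a monitor, using the z3 SMT solver, under two loud simplifications: the property is one fixed example with lookahead (always x >= 0, consecutive increments of at most 10, eventually x > 100), and the quantification over all finite continuations is truncated at a user-chosen horizon, whereas the decidability results in the paper require no such bound.

```python
from z3 import And, Int, Not, Or, Solver, sat

def can_extend(prefix, horizon, satisfy=True):
    """Does some continuation of length <= horizon make the full finite trace
    satisfy (or violate) the example property?"""
    for h in range(horizon + 1):
        xs = [Int(f"x_{i}") for i in range(len(prefix) + h)]
        s = Solver()
        for i, v in enumerate(prefix):
            s.add(xs[i] == v)                       # the observed prefix is fixed
        safety = And([x >= 0 for x in xs] +
                     [xs[i + 1] - xs[i] <= 10 for i in range(len(xs) - 1)])
        phi = And(safety, Or([x > 100 for x in xs]))  # ... and eventually x > 100
        s.add(phi if satisfy else Not(phi))
        if s.check() == sat:
            return True
    return False

def verdict(prefix, horizon=25):
    if not can_extend(prefix, horizon, satisfy=True):
        return "permanently violated"
    if not can_extend(prefix, horizon, satisfy=False):
        return "permanently satisfied"
    return "undetermined"

print(verdict([0, 5, 12]))   # undetermined: both outcomes are still reachable
```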

The flux-mortar mixed finite element method was recently developed for a general class of domain decomposition saddle point problems on non-matching grids. In this work we develop the method for Darcy flow using the multipoint flux approximation as the subdomain discretization. The subdomain problems involve solving positive definite cell-centered pressure systems. The normal flux on the subdomain interfaces is the mortar coupling variable, which plays the role of a Lagrange multiplier to weakly impose continuity of the pressure. We present well-posedness and error analysis based on reformulating the method as a mixed finite element method with a quadrature rule. We develop a non-overlapping domain decomposition algorithm for the solution of the resulting algebraic system that reduces it to an interface problem for the flux-mortar, as well as an efficient interface preconditioner. A series of numerical experiments is presented illustrating the performance of the method on general grids, including applications to flow in complex porous media.
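
The algebraic structure of this reduction can be sketched with SciPy. The blocks below are generic stand-ins (tridiagonal SPD matrices for the cell-centered subdomain pressure systems, random sparse rectangular blocks, assumed to have full column rank, for the mortar coupling), not the multipoint flux discretization; the point is the matrix-free interface operator and its conjugate gradient solve, to which an interface preconditioner such as the one developed in the paper would be supplied via the M argument.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n, m = 200, 12        # cells per subdomain and mortar dofs (hypothetical sizes)

def spd_block(n):     # tridiagonal SPD stand-in for a cell-centered pressure system
    return sp.diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

A = [spd_block(n), spd_block(n)]
B = [sp.random(n, m, density=0.1, format="csc", random_state=i) for i in range(2)]
f = [rng.standard_normal(n) for _ in range(2)]
solve = [spla.factorized(Ai) for Ai in A]   # one positive definite solve per subdomain

def schur_mv(lam):    # action of S = sum_i B_i^T A_i^{-1} B_i on a mortar vector
    return sum(Bi.T @ si(Bi @ lam) for Bi, si in zip(B, solve))

g = sum(Bi.T @ si(fi) for Bi, si, fi in zip(B, solve, f))
S = spla.LinearOperator((m, m), matvec=schur_mv)
lam, info = spla.cg(S, g)                   # interface solve; pass M=... to precondition
p = [si(fi - Bi @ lam) for si, fi, Bi in zip(solve, f, B)]  # recover subdomain pressures
print(info)
```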

Reinforcement learning is one of the core components in designing an artificial intelligence system with an emphasis on real-time response. Reinforcement learning enables the system to take actions in an arbitrary environment, whether or not it has prior knowledge of the environment model. In this paper, we present a comprehensive study of reinforcement learning, covering challenges, recent developments in state-of-the-art techniques, and future directions. The fundamental objective of this paper is to provide a framework for presenting the available reinforcement learning methods that is informative and simple to follow for new researchers and academics in this domain, with the latest concerns in mind. First, we illustrate the core techniques of reinforcement learning in an easily understandable and comparable way. Finally, we analyze and discuss recent developments in reinforcement learning approaches. Our analysis points out that most of the models focus on tuning policy values rather than on other aspects of a particular state of reasoning.
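
For readers new to the area, the "tuning of policy values" referred to above can be made concrete with a few lines of tabular Q-learning; the toy chain environment and all constants below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(s, a):
    """Toy chain: action 1 moves right, action 0 resets; reward in the last state."""
    if a == 0:
        return 0, 0.0
    s2 = min(s + 1, n_states - 1)
    return s2, 1.0 if s2 == n_states - 1 else 0.0

s = 0
for _ in range(5000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    s2, r = step(s, a)
    # the "tuning of policy values": TD update toward r + gamma * max_a' Q(s', a')
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2

print(Q.argmax(axis=1))   # learned greedy policy
```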

With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
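
Since the re-weighting scheme follows directly from the formula above, it can be sketched in a few lines of NumPy; normalising the weights so they average to one is a common convention, assumed here.

```python
import numpy as np

def class_balanced_weights(counts, beta=0.999):
    """Weights proportional to the inverse effective number of samples,
    E_n = (1 - beta**n) / (1 - beta)."""
    counts = np.asarray(counts, dtype=float)
    eff = (1.0 - beta ** counts) / (1.0 - beta)
    w = 1.0 / eff
    return w / w.sum() * len(counts)       # normalise so the weights average to 1

# Example: a long-tailed 3-class problem; the weights would then rescale a
# per-sample loss such as weighted cross-entropy.
print(class_balanced_weights([5000, 500, 50]))
```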
