Reduced-order models have been widely adopted in fluid mechanics, particularly in the context of Newtonian fluid flows. These models offer the ability to predict complex dynamics, such as instabilities and oscillations, at a considerably reduced computational cost. In contrast, the reduced-order modeling of non-Newtonian viscoelastic fluid flows remains relatively unexplored. This work leverages the sparse identification of nonlinear dynamics (SINDy) algorithm to develop interpretable reduced-order models for a broad class of viscoelastic flows. In particular, we explore a benchmark oscillatory viscoelastic flow on the four-roll mill geometry using the classical Oldroyd-B fluid. This flow exemplifies many canonical challenges associated with non-Newtonian flows, including transitions, asymmetries, instabilities, and bifurcations arising from the interplay of viscous and elastic forces, all of which require expensive computations in order to resolve the fast timescales and long transients characteristic of such flows. First, we demonstrate the effectiveness of our data-driven surrogate model in predicting the transient evolution on a simplified representation of the dynamical system. We then describe the ability of the reduced-order model to accurately reconstruct the spatial flow field in a basis obtained via proper orthogonal decomposition. Finally, we develop a fully parametric, nonlinear model that captures the dominant variations of the dynamics with the relevant nondimensional Weissenberg number. This work illustrates the potential to reduce computational costs and improve design, optimization, and control of a large class of non-Newtonian fluid flows with modern machine learning and reduced-order modeling techniques.
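To make the identification step concrete, here is a minimal sketch using the open-source PySINDy package. The damped oscillator below is only a stand-in for the POD/amplitude coordinates extracted from the viscoelastic flow simulations, and the library and threshold settings are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.integrate import solve_ivp
import pysindy as ps

# Stand-in data: a damped linear oscillator plays the role of the
# reduced (POD/amplitude) coordinates of the viscoelastic flow.
def rhs(t, x):
    return [x[1], -x[0] - 0.1 * x[1]]

t = np.linspace(0, 20, 2001)
X = solve_ivp(rhs, (t[0], t[-1]), [1.0, 0.0], t_eval=t).y.T

# Sparse regression onto a cubic polynomial library; the threshold
# prunes small coefficients, yielding an interpretable model.
model = ps.SINDy(
    optimizer=ps.STLSQ(threshold=0.05),
    feature_library=ps.PolynomialLibrary(degree=3),
)
model.fit(X, t=t)
model.print()                    # the identified sparse ODEs
x_sim = model.simulate(X[0], t)  # reduced-order prediction of the transient
```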

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community opportunities to further advance the foundations of modeling, and to propose innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.

We develop lower bounds on communication in the memory hierarchy or between processors for nested bilinear algorithms, such as Strassen's algorithm for matrix multiplication. We build on a previous framework that establishes communication lower bounds by use of the rank expansion, or the minimum rank of any fixed size subset of columns of a matrix, for each of the three matrices encoding a bilinear algorithm. This framework provides lower bounds for a class of dependency directed acyclic graphs (DAGs) corresponding to the execution of a given bilinear algorithm, in contrast to other approaches that yield bounds for specific DAGs. However, our lower bounds only apply to executions that do not compute the same DAG node multiple times. Two bilinear algorithms can be nested by taking Kronecker products between their encoding matrices. Our main result is a lower bound on the rank expansion of a matrix constructed by a Kronecker product derived from lower bounds on the rank expansion of the Kronecker product's operands. We apply the rank expansion lower bounds to obtain novel communication lower bounds for nested Toom-Cook convolution, Strassen's algorithm, and fast algorithms for contraction of partially symmetric tensors.
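The nesting construction is easy to check concretely. Below, Strassen's 2x2 encoding matrices (following the standard Strassen formulas) are nested with themselves via Kronecker products, yielding a 4x4 multiplication using 49 scalar products; the block-wise vectorization is one common convention, chosen so the Kronecker-product encodings line up with recursive blocking.

```python
import numpy as np

# Strassen's 2x2 bilinear algorithm as three encoding matrices (U, V, W):
# c = W @ ((U @ a) * (V @ b)) with a = [A11, A12, A21, A22], etc.
U = np.array([[1, 0, 0, 1], [0, 0, 1, 1], [1, 0, 0, 0], [0, 0, 0, 1],
              [1, 1, 0, 0], [-1, 0, 1, 0], [0, 1, 0, -1]])
V = np.array([[1, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, -1], [-1, 0, 1, 0],
              [0, 0, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]])
W = np.array([[1, 0, 0, 1, -1, 0, 1], [0, 0, 1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0, 0, 0], [1, -1, 1, 0, 0, 1, 0]])

def block_vec(M):
    """Vectorize a 4x4 matrix block-wise into its four 2x2 sub-blocks."""
    return np.concatenate([M[i:i+2, j:j+2].reshape(-1)
                           for i in (0, 2) for j in (0, 2)])

def block_unvec(v):
    C = np.empty((4, 4))
    for k, (i, j) in enumerate([(0, 0), (0, 2), (2, 0), (2, 2)]):
        C[i:i+2, j:j+2] = v[4*k:4*k+4].reshape(2, 2)
    return C

rng = np.random.default_rng(0)
A, B = rng.standard_normal((2, 4, 4))

# Nesting the algorithm with itself = Kronecker products of the encodings:
# a 4x4 multiplication with 7 * 7 = 49 scalar products.
U2, V2, W2 = np.kron(U, U), np.kron(V, V), np.kron(W, W)
c = W2 @ ((U2 @ block_vec(A)) * (V2 @ block_vec(B)))
assert np.allclose(block_unvec(c), A @ B)
```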

We consider a kinetic model of an N-species gas mixture modeled with quantum Bhatnagar-Gross-Krook (BGK) collision operators. The collision operators consist of a relaxation to a Maxwell distribution in the classical case, a Fermi distribution for fermions, and a Bose-Einstein distribution for bosons. In this paper we present a numerical method for simulating this model, which uses an Implicit-Explicit (IMEX) scheme to minimize a certain potential function. This is motivated by theoretical considerations coming from entropy minimization. We show that theoretical properties such as conservation of mass, total momentum, and total energy, as well as positivity of the distribution functions, are preserved by the numerical method presented in this paper, and we illustrate its usefulness and effectiveness with numerical examples.
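A minimal sketch of the IMEX idea for a single classical species is below: transport is treated explicitly (upwind), the stiff relaxation implicitly. Because the BGK operator conserves the moments, the Maxwellian of the post-transport state equals that of the implicit solution, so the implicit step has a closed form that preserves positivity. All grid sizes, the relaxation time, and the initial data are illustrative choices, not those of the paper.

```python
import numpy as np

# Toy single-species classical BGK: explicit upwind transport in x,
# implicit relaxation toward the local Maxwellian (first-order IMEX).
nx, nv = 100, 64
x = np.linspace(0, 1, nx, endpoint=False)
v = np.linspace(-6, 6, nv)
dx, dv = x[1] - x[0], v[1] - v[0]
tau, dt, steps = 1e-2, 2e-4, 500

def maxwellian(f):
    rho = f.sum(axis=1) * dv
    u = (f * v).sum(axis=1) * dv / rho
    T = (f * (v - u[:, None])**2).sum(axis=1) * dv / rho
    return rho[:, None] / np.sqrt(2*np.pi*T[:, None]) * \
        np.exp(-(v - u[:, None])**2 / (2*T[:, None]))

# Initial condition: a density bump, locally Maxwellian.
rho0 = 1.0 + 0.5 * np.exp(-100 * (x - 0.5)**2)
f = rho0[:, None] * np.exp(-v**2 / 2) / np.sqrt(2*np.pi)

vp, vm = np.maximum(v, 0), np.minimum(v, 0)
for _ in range(steps):
    # Explicit upwind transport v * df/dx (periodic in x, CFL-limited dt).
    dfdx_m = (f - np.roll(f, 1, axis=0)) / dx   # backward difference
    dfdx_p = (np.roll(f, -1, axis=0) - f) / dx  # forward difference
    f_star = f - dt * (vp * dfdx_m + vm * dfdx_p)
    # Implicit relaxation: collisions conserve moments, so the Maxwellian
    # of f_star equals that of f^{n+1}; the implicit step solves in closed
    # form and keeps f positive whenever f_star is positive.
    f = (f_star + (dt / tau) * maxwellian(f_star)) / (1 + dt / tau)
```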

We present ReCAT, a recursive composition augmented Transformer that is able to explicitly model hierarchical syntactic structures of raw texts without relying on gold trees during both learning and inference. Existing research along this line restricts data to follow a hierarchical tree structure and thus lacks inter-span communication. To overcome the problem, we propose a novel contextual inside-outside (CIO) layer that learns contextualized representations of spans through bottom-up and top-down passes, where a bottom-up pass forms representations of high-level spans by composing low-level spans, while a top-down pass combines information inside and outside a span. By stacking several CIO layers between the embedding layer and the attention layers in Transformer, the ReCAT model can perform both deep intra-span and deep inter-span interactions, and thus generate multi-grained representations fully contextualized with other spans. Moreover, the CIO layers can be jointly pre-trained with Transformers, making ReCAT enjoy scaling ability, strong performance, and interpretability at the same time. We conduct experiments on various sentence-level and span-level tasks. Evaluation results indicate that ReCAT can significantly outperform vanilla Transformer models on all span-level tasks and baselines that combine recursive networks with Transformers on natural language inference tasks. More interestingly, the hierarchical structures induced by ReCAT exhibit strong consistency with human-annotated syntactic trees, indicating good interpretability brought by the CIO layers.
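The bottom-up half of such a layer can be sketched in a few lines. The chart recursion below is the standard inside pass; the composition function and the uniform averaging over split points are placeholders for ReCAT's learned components, not its actual architecture.

```python
import numpy as np

# A stripped-down "inside" (bottom-up) pass: span representations are
# built by composing sub-spans over all split points.
rng = np.random.default_rng(0)
n, d = 6, 16                       # sentence length, hidden size
tokens = rng.standard_normal((n, d))

W = rng.standard_normal((2 * d, d)) / np.sqrt(2 * d)
def compose(left, right):
    """Placeholder composition of two child spans into a parent span."""
    return np.tanh(np.concatenate([left, right]) @ W)

# inside[i][j] is the representation of the span tokens[i : j+1].
inside = [[None] * n for _ in range(n)]
for i in range(n):
    inside[i][i] = tokens[i]
for length in range(2, n + 1):     # longer spans from shorter ones
    for i in range(n - length + 1):
        j = i + length - 1
        # Average over all split points; ReCAT instead learns a soft
        # weighting over splits.
        parts = [compose(inside[i][k], inside[k + 1][j])
                 for k in range(i, j)]
        inside[i][j] = np.mean(parts, axis=0)

root = inside[0][n - 1]            # representation of the whole sentence
print(root.shape)                  # (16,)
```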

We investigate the product structure of hereditary graph classes admitting strongly sublinear separators. We characterise such classes as subgraphs of the strong product of a star and a complete graph of strongly sublinear size. In a more precise result, we show that if any hereditary graph class $\mathcal{G}$ admits $O(n^{1-\epsilon})$ separators, then for any fixed $\delta\in(0,\epsilon)$ every $n$-vertex graph in $\mathcal{G}$ is a subgraph of the strong product of a graph $H$ with bounded tree-depth and a complete graph of size $O(n^{1-\epsilon+\delta})$. This result holds with $\delta=0$ if we allow $H$ to have tree-depth $O(\log\log n)$. Moreover, using extensions of classical isoperimetric inequalities for grid graphs, we show that the dependence on $\delta$ in our results and the above $\text{td}(H)\in O(\log\log n)$ bound are both best possible. We prove that $n$-vertex graphs of bounded treewidth are subgraphs of the product of a graph with tree-depth $t$ and a complete graph of size $O(n^{1/t})$, which is best possible. Finally, we investigate the conjecture that for any hereditary graph class $\mathcal{G}$ that admits $O(n^{1-\epsilon})$ separators, every $n$-vertex graph in $\mathcal{G}$ is a subgraph of the strong product of a graph $H$ with bounded tree-width and a complete graph of size $O(n^{1-\epsilon})$. We prove this for various classes $\mathcal{G}$ of interest.
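The container in the characterisation is straightforward to build, e.g. with networkx. The sizes below are arbitrary toy choices, and the subgraph check is a brute-force monomorphism test that is only feasible at this scale.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# The container from the characterisation: the strong product of a star
# and a complete graph (of strongly sublinear size in the real theorem).
star = nx.star_graph(4)            # one hub plus four leaves
K = nx.complete_graph(3)
host = nx.strong_product(star, K)

# Check that a small guest graph embeds as a subgraph; a monomorphism
# allows the host to have extra edges beyond those of the guest.
guest = nx.cycle_graph(6)
gm = isomorphism.GraphMatcher(host, guest)
print(host.number_of_nodes(), gm.subgraph_is_monomorphic())  # 15 True
```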

Mini-batch stochastic gradient descent (SGD) and variants thereof approximate the objective function's gradient with a small number of training examples, known as the batch size. Small batch sizes require little computation for each model update but can yield high-variance gradient estimates, which poses some challenges for optimization. Conversely, large batches require more computation but can yield higher-precision gradient estimates. This work presents a method to adapt the batch size to the model's training loss. For various function classes, we show that our method requires the same order of model updates as gradient descent while requiring the same order of gradient computations as SGD. The method requires evaluating the model's loss on the entire dataset at every model update. However, the required computation is greatly reduced by approximating the training loss. We provide experiments illustrating that our method requires fewer model updates without increasing the total amount of computation.
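One plausible reading of such a scheme, on a toy least-squares problem: grow the batch size as an occasionally refreshed estimate of the full training loss shrinks, so early high-variance updates stay cheap while later updates gain precision. The inverse-proportionality rule, constants, and refresh interval below are illustrative guesses, not the paper's exact algorithm.

```python
import numpy as np

# Toy linear regression; the batch size adapts to an estimate of the
# full training loss, refreshed only occasionally to save computation.
rng = np.random.default_rng(0)
N, d = 10_000, 20
X = rng.standard_normal((N, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(N)

w, lr, c = np.zeros(d), 0.01, 10.0
loss_est = np.mean((X @ w - y) ** 2)          # full loss, computed rarely
for step in range(200):
    b = int(min(N, max(1, c / loss_est)))     # adaptive batch size
    idx = rng.choice(N, size=b, replace=False)
    grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / b
    w -= lr * grad
    if step % 20 == 0:                        # refresh the loss estimate
        loss_est = np.mean((X @ w - y) ** 2)
print(f"final loss: {np.mean((X @ w - y) ** 2):.4f}")
```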

Understanding the interactions of a solute with its environment is of fundamental importance in chemistry and biology. In this work, we propose a deep neural network architecture that embeds atom types in their molecular context and produces interatomic potentials that follow fundamental physical laws. The architecture is applied to predict physicochemical properties in heterogeneous systems, including solvation in diverse solvents, 1-octanol-water partitioning, and PAMPA permeability, with a single set of network weights. We show that our architecture generalizes well to these physicochemical properties and outperforms state-of-the-art approaches based on quantum mechanics and neural networks in the task of solvation free energy prediction. The interatomic potentials at each atom in a solute obtained from the model allow quantitative analysis of the physicochemical properties at atomic resolution consistent with chemical and physical reasoning. The software is available at //github.com/SehanLee/C3Net.

We generalise a hybridized discontinuous Galerkin method for incompressible flow problems to non-affine cells, showing that with a suitable element mapping the generalised method preserves a key invariance property that eludes most methods, namely that any irrotational component of the prescribed force is exactly balanced by the pressure gradient and does not affect the velocity field. This invariance property can be preserved in the discrete problem if the incompressibility constraint is satisfied in a sufficiently strong sense. We derive sufficient conditions to guarantee discretely divergence-free functions are exactly divergence-free and give examples of divergence-free finite elements on meshes with triangular, quadrilateral, tetrahedral, or hexahedral cells generated by a (possibly non-affine) map from their respective reference cells. In the case of quadrilateral cells, we prove an optimal error estimate for the velocity field that does not depend on the pressure approximation. Our analysis is supported by numerical results.
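In the continuous setting, the invariance property has a one-line explanation: gradient fields are $L^2$-orthogonal to divergence-free velocities, so perturbing the force by a gradient shifts only the pressure. A compact statement of this standard fact (not specific to this paper's discretisation):

```latex
% Stokes: -\nu\,\Delta u + \nabla p = f, \qquad \nabla\cdot u = 0.
% Replacing f by f + \nabla\phi maps the solution (u, p) to (u, p + \phi),
% because for every divergence-free test function v vanishing on the boundary,
\int_\Omega \nabla\phi \cdot v \,\mathrm{d}x
  = -\int_\Omega \phi \,(\nabla\cdot v)\,\mathrm{d}x = 0 .
% A discretisation inherits this robustness only if its discretely
% divergence-free functions are exactly divergence-free.
```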

In the area of query complexity of Boolean functions, the most widely studied cost measure of an algorithm is the worst-case number of queries made by it on an input. Motivated by the most natural cost measure studied in online algorithms, the competitive ratio, we consider a different cost measure for query algorithms for Boolean functions that captures the ratio of the cost of the algorithm and the cost of an optimal algorithm that knows the input in advance. The cost of an algorithm is its largest cost over all inputs. Grossman, Komargodski and Naor [ITCS'20] introduced this measure for Boolean functions, and dubbed it instance complexity. Grossman et al. showed, among other results, that monotone Boolean functions with instance complexity 1 are precisely those that depend on one or two variables. We complement the above-mentioned result of Grossman et al. by completely characterizing the instance complexity of symmetric Boolean functions. As a corollary we conclude that the only symmetric Boolean functions with instance complexity 1 are the Parity function and its complement. We also study the instance complexity of some graph properties like Connectivity and k-clique containment. In all the Boolean functions we study above, and those studied by Grossman et al., the instance complexity turns out to be the ratio of query complexity to minimum certificate complexity. It is a natural question to ask if this is the correct bound for all Boolean functions. We show a negative answer in a very strong sense, by analyzing the instance complexity of the Greater-Than and Odd-Max-Bit functions. We show that the above-mentioned ratio is linear in the input size for both of these functions, while we exhibit algorithms for which the instance complexity is a constant.
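The denominator of that ratio, the minimum certificate complexity, is simple to compute exactly for small $n$. The brute-force check below (function names are ours) confirms the two behaviours implicit above: Parity needs every bit fixed on every input, whereas OR admits a size-1 certificate on any input containing a 1.

```python
from itertools import combinations, product

def certificate_size(f, x):
    """Smallest k such that fixing some k coordinates of x forces f."""
    n = len(x)
    for k in range(n + 1):
        for S in combinations(range(n), k):
            consistent = (y for y in product((0, 1), repeat=n)
                          if all(y[i] == x[i] for i in S))
            if all(f(y) == f(x) for y in consistent):
                return k
    return n

def min_certificate_complexity(f, n):
    return min(certificate_size(f, x) for x in product((0, 1), repeat=n))

parity = lambda x: sum(x) % 2
or_fn = lambda x: int(any(x))

n = 4
# Parity: every input needs all n bits fixed, so C_min = n; since the
# query complexity is also n, the ratio D/C_min is 1.
print(min_certificate_complexity(parity, n))  # -> 4
# OR: any input containing a 1 has a certificate of size 1, so C_min = 1,
# while the query complexity is n -- the ratio is n.
print(min_certificate_complexity(or_fn, n))   # -> 1
```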

We address the classical inverse problem of recovering the position and shape of obstacles immersed in a planar Stokes flow using boundary measurements. We prove that this problem can be transformed into a shape-from-moments problem to which ad hoc reconstruction methods can be applied. The effectiveness of this approach is confirmed by numerical tests that show significant improvements over those available in the literature to date.

Probabilistic (Bayesian) modeling has experienced a surge of applications in almost all quantitative sciences and industrial areas. This development is driven by a combination of several factors, including better probabilistic estimation algorithms, flexible software, increased computing power, and a growing awareness of the benefits of probabilistic learning. However, a principled Bayesian model building workflow is far from complete and many challenges remain. To aid future research and applications of a principled Bayesian workflow, we ask and provide answers for what we perceive as two fundamental questions of Bayesian modeling, namely (a) "What actually is a Bayesian model?" and (b) "What makes a good Bayesian model?". As an answer to the first question, we propose the PAD model taxonomy that defines four basic kinds of Bayesian models, each representing some combination of the assumed joint distribution of all (known or unknown) variables (P), a posterior approximator (A), and training data (D). As an answer to the second question, we propose ten utility dimensions according to which we can evaluate Bayesian models holistically, namely, (1) causal consistency, (2) parameter recoverability, (3) predictive performance, (4) fairness, (5) structural faithfulness, (6) parsimony, (7) interpretability, (8) convergence, (9) estimation speed, and (10) robustness. Further, we propose two example utility decision trees that describe hierarchies and trade-offs between utilities depending on the inferential goals that drive model building and testing.
