
We continue our investigation of finite deformation linear viscoelastodynamics by focusing on the construction of accurate and reliable numerical schemes. The concrete thermomechanical foundation developed in the previous study paves the way for discrete formulations that preserve critical physical and mathematical structures. Energy stability, momentum conservation, and temporal accuracy are the primary factors in our algorithm design. For inelastic materials, the directionality condition, a property ensuring that the stress is energy consistent, is extended to take the dissipation effect into account. Moreover, the integration of the constitutive relations calls for an algorithmic design for the internal state variables and their conjugate variables. A directionality condition for the conjugate variables is introduced as an indispensable ingredient for ensuring physically correct numerical dissipation. By leveraging the particular structure of the configurational free energy, a set of update formulas for the internal state variables is obtained. Detailed analysis reveals that the overall discrete schemes are energy-momentum consistent and achieve first- and second-order temporal accuracy, respectively. Numerical examples are provided to demonstrate the appealing features of the proposed methodology.
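As a rough illustration of the kind of condition involved (the notation below is generic and not taken from the paper), the directionality condition in the purely elastic case asks the algorithmic stress to reproduce the change in stored energy over a time step exactly:

```latex
% Sketch of an energy-consistency (directionality) condition in generic
% notation: \Psi is the stored energy, E_n the strain at time step n, and
% \tilde{S} the algorithmic stress used over the step [t_n, t_{n+1}].
\[
  \tilde{\mathbf{S}} : \bigl(\mathbf{E}_{n+1} - \mathbf{E}_{n}\bigr)
  \;=\; \Psi(\mathbf{E}_{n+1}) - \Psi(\mathbf{E}_{n}),
\]
% i.e. the discrete stress power over the step equals the change in stored
% energy; for dissipative (viscoelastic) materials the equality is relaxed so
% that the difference accounts for a non-negative dissipation.
```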

Related content

Automata operating on infinite objects feature prominently in the theory of the modal $\mu$-calculus. One such application concerns the tableau games introduced by Niwi\'{n}ski & Walukiewicz, whose winning condition for infinite plays can be naturally checked by a nondeterministic parity stream automaton. Inspired by the work of Jungteerapanich and Stirling, we show how determinization constructions for this automaton can be used to directly obtain proof systems for the $\mu$-calculus. More concretely, we introduce a binary tree construction for determinizing nondeterministic parity stream automata. Using this construction we define the annotated cyclic proof system $\mathsf{BT}$, in which formulas are annotated by tuples of binary strings. Soundness and completeness of this system follow almost immediately from the correctness of the determinization method.
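For orientation, one common convention for the acceptance condition of a parity stream automaton is sketched below (conventions differ on min versus max and on the role of even priorities; this is only one of them):

```latex
% One common parity acceptance condition (other conventions exist):
% \Omega assigns a priority (a natural number) to every state, and an
% infinite run \rho = q_0 q_1 q_2 \cdots is accepting iff
\[
  \max \{\, \Omega(q) \;\mid\; q \text{ occurs infinitely often in } \rho \,\}
  \quad \text{is even.}
\]
```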

Analysis of high-dimensional data, where the number of covariates is larger than the sample size, is a topic of current interest. In such settings, an important goal is to estimate the signal level $\tau^2$ and noise level $\sigma^2$, i.e., to quantify how much of the variation in the response variable can be explained by the covariates and how much is left unexplained. This thesis considers the estimation of these quantities in a semi-supervised setting, where for many observations only the vector of covariates $X$ is given, with no response $Y$. Our main research question is: how can one use the unlabeled data to better estimate $\tau^2$ and $\sigma^2$? We consider two frameworks: a linear regression model and a linear projection model in which linearity is not assumed. In the first framework, while linear regression is used, no sparsity assumptions on the coefficients are made. In the second framework, the linearity assumption is also relaxed and we aim to estimate the signal and noise levels defined by the linear projection. We first propose a naive estimator which, under some assumptions, is unbiased and consistent in both frameworks. We then show how the naive estimator can be improved by using zero-estimators, where a zero-estimator is a statistic arising from the unlabeled data whose expected value is zero. In the first framework, we calculate the optimal zero-estimator improvement and discuss ways to approximate it. In the second framework, such optimality no longer holds, and we suggest two zero-estimators that improve the naive estimator, although not necessarily optimally. Furthermore, we show that our approach reduces the variance of general initial estimators, and we present an algorithm that potentially improves any initial estimator. Lastly, we consider four datasets and study the performance of our suggested methods.
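The generic mechanism by which a zero-estimator can help is the familiar control-variate identity; the sketch below uses generic notation and is not the specific construction from the thesis:

```latex
% If \hat{\theta} is an unbiased estimator of \theta and Z is any statistic
% (here, built from the unlabeled data) with E[Z] = 0, then for any constant c
\[
  \hat{\theta}_c \;=\; \hat{\theta} - c\,Z
\]
% remains unbiased, and its variance
\[
  \operatorname{Var}(\hat{\theta}_c)
  \;=\; \operatorname{Var}(\hat{\theta})
        - 2c\,\operatorname{Cov}(\hat{\theta}, Z)
        + c^{2}\operatorname{Var}(Z)
\]
% is minimized at c^{*} = Cov(\hat{\theta}, Z)/Var(Z), where it equals
% Var(\hat{\theta})(1 - \rho^2) with \rho = corr(\hat{\theta}, Z).
```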

Diabetic neuropathy is a disorder characterized by impaired nerve function and a reduction in the number of epidermal nerve fibers per epidermal surface area. Additionally, as neuropathy-related nerve fiber loss and regrowth progress over time, the two-dimensional spatial arrangement of the nerves becomes more clustered. These observations suggest that, with the development of neuropathy, the spatial pattern of diminished skin innervation is defined by a thinning process which remains incompletely characterized. We regard samples obtained from healthy controls and subjects suffering from diabetic neuropathy as realisations of planar point processes consisting of nerve entry points and nerve endings, and propose point process models based on spatial thinning to describe the change as neuropathy advances. Initially, the hypothesis that nerve removal occurs completely at random is tested using independent random thinning of healthy patterns. Then, a dependent parametric thinning model that favors the removal of isolated nerve trees is proposed. Approximate Bayesian computation is used to infer the distribution of the model parameters, and the goodness-of-fit of the models is evaluated using both non-spatial and spatial summary statistics. Our findings suggest that the nerve mortality process changes behaviour as neuropathy advances.
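As a minimal sketch of the completely-at-random null model (the pattern, retention probability, and names below are made up for illustration), independent thinning simply keeps each point of a healthy pattern with a fixed probability:

```python
import numpy as np

# Independent (completely random) p-thinning of a planar point pattern:
# every point is retained with the same probability, irrespective of its
# neighbours. The pattern and retention probability here are synthetic.
rng = np.random.default_rng(1)

points = rng.uniform(size=(200, 2))   # stand-in for a healthy nerve pattern
p_retain = 0.6                        # retention probability

keep = rng.random(len(points)) < p_retain
thinned = points[keep]

print(f"{len(points)} points before thinning, {len(thinned)} after")
```

A dependent thinning of the kind proposed in the study would instead let the retention probability of a nerve tree depend on its surroundings, for instance lowering it for isolated trees.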

Entropic risk (ERisk) is an established risk measure in finance, quantifying risk by an exponential re-weighting of rewards. We study ERisk for the first time in the context of turn-based stochastic games with the total reward objective. This gives rise to an objective function that demands that systems be controlled in a risk-averse manner. We show that the resulting games are determined and, in particular, admit optimal memoryless deterministic strategies. This contrasts with risk measures that have previously been considered in the special case of Markov decision processes and that require randomization and/or memory. We provide several results on the decidability and computational complexity of the threshold problem, i.e., whether the optimal value of ERisk exceeds a given threshold. In the most general case, the problem is decidable subject to Schanuel's conjecture. If all inputs are rational, the resulting threshold problem can be solved using algebraic numbers, leading to decidability via a polynomial-time reduction to the existential theory of the reals. Further restrictions on the encoding of the input allow the solution of the threshold problem in NP$\cap$coNP. Finally, an approximation algorithm for the optimal value of ERisk is provided.
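For reference, one common definition of entropic risk for a random reward $X$ and risk parameter $\gamma$ is given below (conventions differ in sign and in whether rewards or costs are re-weighted):

```latex
% Entropic risk as an exponentially re-weighted expectation of the reward X;
% for a risk-averse controller one typically takes \gamma < 0.
\[
  \mathrm{ERisk}_{\gamma}(X) \;=\; \frac{1}{\gamma}\,
  \log \mathbb{E}\bigl[\, e^{\gamma X} \,\bigr].
\]
% As \gamma \to 0 this recovers the plain expectation E[X]; for \gamma < 0 it
% is a certainty equivalent that penalizes the variability of X.
```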

In this work, we propose a simple yet generic preconditioned Krylov subspace method for a large class of nonsymmetric block Toeplitz all-at-once systems arising from discretizing evolutionary partial differential equations. Our main result is the construction of two novel symmetric positive definite preconditioners, which can be efficiently diagonalized by the discrete sine transform matrix. More specifically, our approach is to first permute the original linear system to obtain a symmetric one, and subsequently to develop the desired preconditioners based on the spectral symbol of the modified matrix. We then show that the eigenvalues of the preconditioned matrix sequences are clustered around $\pm 1$, which entails rapid convergence when the minimal residual method is employed. Alternatively, when the conjugate gradient method on the normal equations is used, we show that our preconditioner is effective in the sense that the eigenvalues of the preconditioned matrix sequence are clustered around unity. An extension of our proposed preconditioning method is given for high-order backward difference time discretization schemes, which can be applied to a wide range of time-dependent equations. Numerical examples are given, also in the variable-coefficient setting, to demonstrate the effectiveness of our proposed preconditioners, which consistently outperform an existing block circulant preconditioner discussed in the relevant literature.
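The kind of construction involved can be sketched as follows; the notation is generic and the details of the paper's preconditioners may differ:

```latex
% Symmetrization of a block Toeplitz all-at-once system (generic notation).
% Y_n is the n x n anti-identity (flip) matrix; Toeplitz matrices are
% persymmetric, so if the m x m blocks of \mathcal{T} are themselves
% symmetric, the permuted system
\[
  (Y_n \otimes I_m)\,\mathcal{T}\,\mathbf{u} \;=\; (Y_n \otimes I_m)\,\mathbf{b}
\]
% has a symmetric (though indefinite) coefficient matrix. One may then build
% a symmetric positive definite preconditioner of the schematic form
% P = (S_n \otimes I_m)\, D \,(S_n \otimes I_m), with S_n the symmetric,
% orthogonal discrete sine transform matrix and D block diagonal, and apply
% MINRES; clustering of the preconditioned eigenvalues around \pm 1 then
% yields rapid convergence.
```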

This study investigates a class of initial-boundary value problems for the time-fractional mixed sub-diffusion and diffusion-wave equation (SDDWE). To facilitate the development of a numerical method and its analysis, the original problem is transformed into a new integro-differential model which includes Caputo derivatives and Riemann-Liouville fractional integrals with orders in (0,1). By providing an a priori estimate of the solution, we establish the existence and uniqueness of a numerical solution to the problem. We propose a second-order method to approximate the Riemann-Liouville fractional integral and employ an L2-type formula to approximate the Caputo derivative. This yields a method with second-order temporal accuracy for the considered model. The unconditional stability of the proposed difference scheme is proved. Moreover, we demonstrate the potential of the proposed method for constructing and analyzing second-order L2-type numerical schemes for a broader class of time-fractional mixed SDDWEs with multi-term time-fractional derivatives. Numerical results are presented to assess the accuracy of the method and validate the theoretical findings.
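For reference, the two fractional operators involved are given below for an order $\alpha \in (0,1)$ (standard definitions; the paper's notation may differ):

```latex
% Riemann-Liouville fractional integral of order \alpha \in (0,1):
\[
  \bigl(I_t^{\alpha} u\bigr)(t) \;=\; \frac{1}{\Gamma(\alpha)}
  \int_0^{t} (t-s)^{\alpha-1}\, u(s)\, \mathrm{d}s .
\]
% Caputo fractional derivative of order \alpha \in (0,1):
\[
  \bigl({}^{C}D_t^{\alpha} u\bigr)(t) \;=\; \frac{1}{\Gamma(1-\alpha)}
  \int_0^{t} (t-s)^{-\alpha}\, u'(s)\, \mathrm{d}s .
\]
```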

This paper reviews the state of the art on parameter correlation in two-parameter models. The apparent contradictions between different authors regarding the ability of D--optimality to simultaneously reduce the correlation and the area of the confidence ellipse in two-parameter models are analyzed. Two main approaches were found: 1) authors who consider that the optimality criteria simultaneously control the precision and correlation of the parameter estimators; and 2) authors who consider a combination of criteria to achieve the same objective. An analytical criterion is provided that combines in its structure both the precision of the parameter estimators and the reduction of the correlation between them. To show its performance, the criterion is tested both on a simple linear regression model, considering all possible design spaces, and on a non-linear model with strongly correlated parameter estimators (Michaelis--Menten). The proposed criterion outperforms all the strategies and criteria considered for controlling the precision and the correlation at the same time.
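To make the tension concrete (generic notation, not taken from the paper): for a two-parameter model with Fisher information matrix $M(\xi)$ under design $\xi$, the asymptotic covariance of the estimators is proportional to $M(\xi)^{-1}$, so

```latex
% D-optimality versus correlation of the estimators in a two-parameter model:
\[
  \text{D-optimality:}\quad \max_{\xi}\, \det M(\xi),
  \qquad
  \operatorname{corr}\bigl(\hat{\theta}_1, \hat{\theta}_2\bigr)
  \;=\; \frac{[M(\xi)^{-1}]_{12}}
             {\sqrt{[M(\xi)^{-1}]_{11}\,[M(\xi)^{-1}]_{22}}}
  \;=\; \frac{-\,M_{12}(\xi)}{\sqrt{M_{11}(\xi)\,M_{22}(\xi)}} .
\]
% Maximizing \det M shrinks the confidence ellipse but does not by itself
% drive the off-diagonal entry, and hence the correlation, to zero. The
% Michaelis--Menten model v(x) = V_{\max}\, x/(K_m + x) is a standard example
% with strongly correlated parameter estimators.
```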

A fundamental question asked in modal logic is whether a given theory is consistent. But consistent with what? A typical way to address this question identifies a choice of background knowledge axioms (say, S4, D, etc.) and then shows the assumptions codified by the theory in question to be consistent with those background axioms. But determining the specific choice and division of background axioms is, at least sometimes, little more than tradition. This paper introduces generic theories for propositional modal logic to address consistency results in a more robust way. As building blocks for background knowledge, generic theories provide a standard for categorical determinations of consistency. We argue that the results and methods of this paper help to elucidate problems in epistemology and enjoy sufficient scope and power to have purchase on problems bearing on modalities in judgement, inference, and decision making.

Current research in the computer vision field mainly focuses on improving the correctness and inference-time performance of Deep Learning (DL) models. However, there is still little work on the huge carbon footprint left by training DL models. This study aims to analyze the impact of the model architecture and training environment when training greener computer vision models. We divide this goal into two research questions. First, we analyze the effects of model architecture on achieving greener models while keeping correctness at optimal levels. Second, we study the influence of the training environment on producing greener models. To investigate these relationships, we collect multiple metrics related to energy efficiency and model correctness during the models' training. Then, we outline the trade-offs between the measured energy efficiency and the models' correctness with respect to model architecture, and their relationship with the training environment. We conduct this research in the context of a computer vision system for image classification. In conclusion, we show that selecting the proper model architecture and training environment can reduce energy consumption dramatically (up to 98.83\%) at the cost of a negligible decrease in correctness. We also find evidence that GPUs should scale with the models' computational complexity for better energy efficiency.

Recently, graph neural networks (GNNs) have been gaining considerable attention for simulating dynamical systems, owing to their inductive nature, which leads to zero-shot generalizability. Similarly, physics-informed inductive biases in deep-learning frameworks have been shown to give superior performance in learning the dynamics of physical systems. There is a growing body of literature that attempts to combine these two approaches. Here, we evaluate the performance of thirteen different graph neural networks, namely Hamiltonian and Lagrangian graph neural networks, graph neural ODEs, and their variants with explicit constraints and different architectures. We briefly explain the theoretical formulations, highlighting the similarities and differences in the inductive biases and graph architectures of these systems. We evaluate these models on spring, pendulum, gravitational, and 3D deformable solid systems, comparing performance in terms of rollout error, conserved quantities such as energy and momentum, and generalizability to unseen system sizes. Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and the decoupling of kinetic and potential energies, exhibit significantly enhanced performance. Further, all the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training systems, thus providing a promising route to simulating large-scale realistic systems.
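The shared physics-informed bias of the Hamiltonian-style models can be summarized as follows (generic notation): a network predicts a scalar energy, often split into kinetic and potential parts, and trajectories are obtained by integrating Hamilton's equations rather than by predicting states directly:

```latex
% Hamiltonian inductive bias: learn H(q, p) (often H = T(p) + V(q)) and
% roll out the dynamics via Hamilton's equations,
\[
  \dot{q} \;=\; \frac{\partial H}{\partial p},
  \qquad
  \dot{p} \;=\; -\,\frac{\partial H}{\partial q},
\]
% so that quantities such as the total energy are (approximately) conserved
% along the predicted trajectory.
```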
