
This paper introduces an approach to improve volume conservation in the immersed boundary (IB) method using regularized delta functions derived from composite B-splines. These delta functions are tensor-product kernels of B-splines whose polynomial degrees differ between the directions normal and tangential to the corresponding velocity component. Our method addresses the long-standing volume conservation issues of the conventional IB method, which are particularly evident in simulations of pressurized, closed membranes. We demonstrate that our approach significantly enhances volume conservation, rivaling the performance of the non-local Divergence-Free Immersed Boundary (DFIB) method introduced by Bao et al., while maintaining the local nature of the classical IB method. This avoids the computational overhead associated with the DFIB method's construction of an explicit velocity potential, which requires additional Poisson solves. Numerical experiments show that sufficiently regular composite B-spline kernels can maintain initial volumes to within machine precision. We analyze the relationship between kernel regularity and the accuracy of force spreading and velocity interpolation operations. Our findings indicate that composite B-splines of at least $C^1$ regularity produce results comparable to the DFIB method in dynamic simulations, with volume conservation errors dominated primarily by the truncation error of the time-stepping scheme. This work offers a computationally efficient alternative for improving volume conservation in IB methods, which is particularly beneficial for large-scale, three-dimensional simulations. The proposed approach requires minimal modifications to an existing IB code, making it an accessible improvement for a wide range of applications in computational fluid dynamics and fluid-structure interaction.
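
As a concrete illustration, a regularized delta function built as a tensor product of B-splines can be sketched as follows. The grid spacing `h` and the per-direction degree choices in the example call are illustrative only; the actual assignment of degrees in the normal and tangential directions for each velocity component follows the paper.

```python
import numpy as np

def bspline(r, degree):
    """Centered cardinal B-spline of the given degree, evaluated at r (in grid units)."""
    a = np.abs(r)
    if degree == 1:      # hat function, support [-1, 1]
        return np.maximum(0.0, 1.0 - a)
    if degree == 2:      # quadratic, support [-3/2, 3/2]
        return np.where(a <= 0.5, 0.75 - a**2,
               np.where(a <= 1.5, 0.5 * (1.5 - a)**2, 0.0))
    if degree == 3:      # cubic, support [-2, 2]
        return np.where(a <= 1.0, 2.0 / 3.0 - a**2 + 0.5 * a**3,
               np.where(a <= 2.0, (2.0 - a)**3 / 6.0, 0.0))
    raise ValueError("degree not implemented in this sketch")

def composite_delta(x, X, h, degrees):
    """Tensor-product regularized delta function with a (possibly different)
    B-spline degree in each coordinate direction.
    x: Eulerian grid point, X: Lagrangian marker location, h: grid spacing."""
    w = 1.0
    for xi, Xi, deg in zip(x, X, degrees):
        w *= bspline((xi - Xi) / h, deg) / h
    return w

# Example: for one velocity component, use a lower degree in one direction and
# a higher degree in the other (an illustrative choice, not the paper's rule).
print(composite_delta(x=(0.5, 0.25), X=(0.43, 0.31), h=0.1, degrees=(2, 3)))
```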

Related Content

This paper proposes a variance-based measure of importance for coherent systems with dependent and heterogeneous components. The particular cases of independent components and homogeneous components are also considered. We model the dependence structure among the components with a copula. The proposed measure allows us to provide the best estimate of the system lifetime, in terms of mean squared error, under the assumption that the lifetime of one of its components is known. We include theoretical results that are useful for calculating a closed-form expression of our measure and for comparing two components of a system. We also provide procedures to approximate the importance measure by Monte Carlo simulation. Finally, we illustrate the main results with several examples.
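
One plausible reading of such a measure is the variance of the conditional expectation of the system lifetime given one component's lifetime (equivalently, the mean-squared-error reduction from knowing that component), normalized by the total variance. The nested Monte Carlo sketch below illustrates this reading for a 2-out-of-3 system with independent exponential components; the definition, the system structure, and the lifetime distributions are assumptions here, and the paper's copula-based dependent case is more general.

```python
import numpy as np

rng = np.random.default_rng(0)

def system_lifetime(t):
    """2-out-of-3 coherent system: lifetime is the middle order statistic."""
    return np.sort(t, axis=-1)[..., 1]

def importance(j, rates=(1.0, 1.5, 2.0), n_outer=1000, n_inner=1000):
    """Monte Carlo estimate of Var(E[T | T_j]) / Var(T) for component j
    (assumed definition; independent exponential lifetimes for simplicity)."""
    rates = np.asarray(rates)
    cond_means = np.empty(n_outer)
    for k in range(n_outer):
        t = rng.exponential(1.0 / rates, size=(n_inner, rates.size))
        t[:, j] = rng.exponential(1.0 / rates[j])   # fix T_j for this outer draw
        cond_means[k] = system_lifetime(t).mean()   # approximates E[T | T_j]
    t_all = rng.exponential(1.0 / rates, size=(n_outer * n_inner, rates.size))
    return cond_means.var() / system_lifetime(t_all).var()

for j in range(3):
    print(f"component {j}: estimated importance = {importance(j):.3f}")
```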

Given a finite set of matrices with integer entries, the matrix mortality problem asks whether there exists a product of these matrices equal to the zero matrix. We consider a special case of this problem in which all entries of the matrices are nonnegative. This case is equivalent to the NFA mortality problem, which, given an NFA, asks for a word $w$ such that the image of every state under $w$ is the empty set. The size of the alphabet of the NFA is then equal to the number of matrices in the set. We study the length of the shortest such words as a function of the alphabet size. We show that for an NFA with $n$ states, this length can be at least $2^n - 1$ for an alphabet of size $n$, $2^{(n - 4)/2}$ for an alphabet of size $3$, and $2^{(n - 2)/3}$ for an alphabet of size $2$. We also discuss further open problems related to the mortality of NFAs and DFAs.
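
For intuition, a shortest killing word can be found by breadth-first search over subsets of states, which is exponential in $n$ in the worst case and thus consistent with the lower bounds above. A minimal sketch with a hypothetical 3-state NFA:

```python
from collections import deque

def shortest_killing_word(states, alphabet, delta):
    """BFS over subsets of states; delta maps (state, letter) -> set of states.
    Returns a shortest word w whose image of every state is empty, or None."""
    start = frozenset(states)
    seen = {start: ""}
    queue = deque([start])
    while queue:
        current = queue.popleft()
        if not current:                      # empty image: every state is killed
            return seen[current]
        for a in alphabet:
            image = frozenset(q2 for q in current for q2 in delta.get((q, a), ()))
            if image not in seen:
                seen[image] = seen[current] + a
                queue.append(image)
    return None  # the NFA is not mortal

# Hypothetical 3-state NFA over the alphabet {a, b}.
delta = {(0, "a"): {1}, (1, "a"): {2}, (2, "a"): {0},
         (0, "b"): {0}, (1, "b"): set(), (2, "b"): {2}}
print(shortest_killing_word({0, 1, 2}, "ab", delta))   # prints "babab"
```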

Sample compression theory provides generalization guarantees for predictors that can be fully defined using a subset of the training dataset and a (short) message string, generally defined as a binary sequence. Previous works provided generalization bounds for the zero-one loss, which is restrictive, notably when applied to deep learning approaches. In this paper, we present a general framework for deriving new sample compression bounds that hold for real-valued losses. We empirically demonstrate the tightness of the bounds and their versatility by evaluating them on different types of models, e.g., neural networks and decision forests, trained with the Pick-To-Learn (P2L) meta-algorithm, which transforms the training method of any machine-learning predictor to yield sample-compressed predictors. In contrast to existing P2L bounds, ours are valid in the non-consistent case.
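
Roughly, P2L grows a compression set greedily: it repeatedly trains on the current set and adds the worst-handled remaining example, stopping when a criterion is met. The sketch below is only a schematic of that loop under stated assumptions (a scikit-learn-style `fit`/`predict` interface and a simple stopping rule), not the authors' exact procedure.

```python
import numpy as np

def pick_to_learn(model, X, y, max_size=50):
    """Schematic P2L-style loop: grow a compression set by repeatedly adding
    the remaining example on which the current predictor errs most."""
    n = len(y)
    chosen = [0]                           # seed set; classifiers may need one example per class
    for _ in range(max_size - 1):
        model.fit(X[chosen], y[chosen])
        remaining = np.setdiff1d(np.arange(n), chosen)
        errors = np.abs(model.predict(X[remaining]) - y[remaining])
        if errors.max() == 0:              # consistent on the remaining data: stop
            break
        chosen.append(int(remaining[np.argmax(errors)]))
    model.fit(X[chosen], y[chosen])
    return model, chosen                   # sample-compressed predictor + compression set indices
```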

Spatial variables can be observed in many different forms, such as regularly sampled random fields (lattice data), point processes, and randomly sampled spatial processes. Joint analysis of such collections of observations is clearly desirable, but is complicated by the lack of an easily implementable analysis framework. It is well known that Fourier transforms provide such a framework, but its form has eluded data analysts. We formalize it by providing a multitaper analysis framework using coupled discrete and continuous data tapers, combined with the discrete Fourier transform for inference. This set of tools is important, as it forms the backbone of practical spectral analysis. In higher dimensions it is important not to be constrained to Cartesian product domains, and so we develop the methodology for spectral analysis using irregular-domain data tapers and the tapered discrete Fourier transform. We discuss its fast implementation and its asymptotic as well as large finite-domain properties. Estimators of partial association between different spatial processes are provided, as are principled methods to determine their significance, and we demonstrate their practical utility on a large-scale ecological dataset.
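
To fix ideas, here is a minimal multitaper spectral estimate for regularly sampled lattice data, using tensor products of discrete sine tapers. The paper's coupled discrete/continuous tapers and irregular-domain tapers generalize this considerably, so the taper family and normalization here should be read as illustrative choices.

```python
import numpy as np

def sine_tapers(n, k):
    """First k discrete sine tapers of length n (each has unit energy)."""
    j = np.arange(1, k + 1)[:, None]
    t = np.arange(1, n + 1)[None, :]
    return np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * j * t / (n + 1))

def multitaper_2d(field, k=3):
    """Average of tapered 2-D periodograms using tensor-product sine tapers.
    For unit-variance white noise the estimate should be roughly flat at 1."""
    ny, nx = field.shape
    ty, tx = sine_tapers(ny, k), sine_tapers(nx, k)
    est = np.zeros((ny, nx))
    for a in range(k):
        for b in range(k):
            J = np.fft.fft2(field * np.outer(ty[a], tx[b]))   # tapered DFT
            est += np.abs(J) ** 2
    return est / (k * k)

rng = np.random.default_rng(1)
S = multitaper_2d(rng.standard_normal((64, 64)))
print(S.mean())   # close to 1 for white noise
```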

LLM text decoding is a key component of perceived LLM quality. We present two experiments showing that decoding methods can be improved by manipulating token probabilities. First, we test a few LLMs on the SummEval summary-scoring dataset to measure reading comprehension. We compare scores obtained with greedy decoding to expected values over the next-token distribution. We scale the logits by a large temperature to increase the entropy of the scores. This yields a strong improvement in performance on SummEval (in terms of correlation with human judgement). We see improvements from 6-8% to 13-28% for 7B Mistral and from 20-46% to 37-56% for Mixtral, beating the GPT 4 0314 result on two metrics. Part of the gain appears to be related to positional bias. Second, we use a probability-based tree sampling algorithm to examine all the most probable generations for a given prompt.
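
The core trick of the scoring experiment can be illustrated directly: instead of taking the argmax score token, temperature-scale the logits over the candidate score tokens and take the probability-weighted mean. The logits below are made up for illustration; in real usage they would come from the model's next-token distribution restricted to the score tokens.

```python
import numpy as np

def expected_score(score_logits, scores, temperature=10.0):
    """Expected score under the temperature-scaled next-token distribution,
    instead of the greedy (argmax) score."""
    z = np.asarray(score_logits, dtype=float) / temperature
    p = np.exp(z - z.max())
    p /= p.sum()
    return float(np.dot(p, scores))

# Hypothetical logits for the tokens "1".."5" as the next token.
logits = [2.1, 3.4, 5.0, 4.8, 1.2]
scores = [1, 2, 3, 4, 5]
print("greedy score:", scores[int(np.argmax(logits))])
print("expected score:", round(expected_score(logits, scores), 3))
```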

The completely randomized experiment is the gold standard for causal inference. When covariate information is available for each experimental candidate, a typical approach is to include it in covariate adjustment for more accurate treatment effect estimation. In this paper, we investigate this problem under the randomization-based framework, i.e., the covariates and potential outcomes of all experimental candidates are treated as deterministic quantities and the randomness comes solely from the treatment assignment mechanism. Under this framework, to achieve asymptotically valid inference, existing estimators usually require either (i) that the dimension of covariates $p$ grows at a rate no faster than $O(n^{3 / 4})$ as the sample size $n \to \infty$; or (ii) certain sparsity constraints on the linear representations of potential outcomes constructed via possibly high-dimensional covariates. Here, we consider the moderately high-dimensional regime where $p$ is allowed to be of the same order of magnitude as $n$. We develop a novel debiased estimator with a corresponding inference procedure and establish its asymptotic normality under mild assumptions. Our estimator is model-free and does not require any sparsity constraint on the potential outcomes' linear representations. We also discuss its asymptotic efficiency improvements over the unadjusted treatment effect estimator under different dimensionality constraints. Numerical analysis confirms that, compared to other regression-adjustment-based treatment effect estimators, our debiased estimator performs well in moderately high dimensions.
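
The paper's debiased estimator is not reproduced here. For context, the sketch below contrasts the unadjusted difference-in-means with a standard regression-adjusted estimator of the Lin type (treatment-by-covariate interactions with centered covariates) on simulated data; this is the kind of baseline adjustment whose bias in moderately high dimensions the debiasing targets. The data-generating process is entirely made up.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 50
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p) / np.sqrt(p)
tau = 1.0                                         # true average treatment effect
W = np.zeros(n)
W[rng.permutation(n)[: n // 2]] = 1.0             # completely randomized assignment
Y = X @ beta + tau * W + rng.standard_normal(n)

# Unadjusted difference in means.
tau_dm = Y[W == 1].mean() - Y[W == 0].mean()

# Lin-type regression adjustment: regress Y on treatment, centered covariates,
# and their interactions; the treatment coefficient estimates the ATE.
Xc = X - X.mean(axis=0)
D = np.column_stack([np.ones(n), W, Xc, W[:, None] * Xc])
coef, *_ = np.linalg.lstsq(D, Y, rcond=None)
tau_adj = coef[1]

print(f"difference in means: {tau_dm:.3f}, regression-adjusted: {tau_adj:.3f}")
```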

In this paper, we introduce and analyze a mixed formulation for the Oseen eigenvalue problem by introducing the pseudostress tensor as a new unknown, allowing us to eliminate the fluid pressure. The well-posedness of the solution operator is established using a fixed-point argument. For the numerical analysis, we use the tensorial versions of Raviart-Thomas and Brezzi-Douglas-Marini elements to approximate the pseudostress, and piecewise polynomials for the velocity. Convergence and a priori error estimates are derived based on compact operator theory. We present a series of numerical tests in two and three dimensions to confirm the theoretical findings.
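
For orientation, a pseudostress-velocity reformulation of the Oseen eigenvalue problem typically proceeds along the following lines; this is only a sketch, and the paper's precise pseudostress definition and functional setting may differ.

```latex
\begin{aligned}
&\text{Find } (\lambda,\boldsymbol{u},p) \text{ such that}\quad
  -\nu\Delta\boldsymbol{u} + (\boldsymbol{\beta}\cdot\nabla)\boldsymbol{u} + \nabla p
  = \lambda\boldsymbol{u}, \qquad \operatorname{div}\boldsymbol{u} = 0.\\[2pt]
&\text{Introducing the pseudostress } \boldsymbol{\sigma} := \nu\nabla\boldsymbol{u} - p\,\mathbb{I},
  \text{ the momentum equation becomes}\\
&\qquad -\operatorname{div}\boldsymbol{\sigma}
  + (\boldsymbol{\beta}\cdot\nabla)\boldsymbol{u} = \lambda\boldsymbol{u},\\[2pt]
&\text{and, since } \operatorname{tr}(\nabla\boldsymbol{u}) = \operatorname{div}\boldsymbol{u} = 0,
  \text{ the pressure is recovered as } p = -\tfrac{1}{d}\operatorname{tr}(\boldsymbol{\sigma}),
\end{aligned}
```

which is how the fluid pressure is eliminated from the mixed formulation; $\boldsymbol{\sigma}$ is then approximated with tensorial Raviart-Thomas or Brezzi-Douglas-Marini elements and $\boldsymbol{u}$ with piecewise polynomials.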

This paper extends the optimal-trading framework developed in arXiv:2409.03586v1 to compute optimal strategies with real-world constraints. The aim of the current paper, as with the previous one, is to study trading in the context of multi-player non-cooperative games. While the former paper relies on methods from the calculus of variations, with optimal strategies arising as solutions of partial differential equations, the current paper demonstrates that the entire framework may be reframed as a quadratic programming problem; cast in this light, constraints are readily incorporated into the calculation of optimal strategies. An added benefit is that two-trader equilibria may be calculated as the end points of a dynamic process in which traders make repeated adjustments to each other's strategies.
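
Once the problem is posed as a quadratic program, off-the-shelf solvers handle constraints directly. A generic sketch with cvxpy follows; the cost matrices and the particular constraints are placeholders for illustration, not the paper's actual trading-cost model.

```python
import numpy as np
import cvxpy as cp

T = 20                                   # number of trading periods
rng = np.random.default_rng(3)
A = rng.standard_normal((T, T))
P = A @ A.T + np.eye(T)                  # placeholder positive-definite cost matrix
P = 0.5 * (P + P.T)                      # enforce exact symmetry for the QP solver
q = rng.standard_normal(T)               # placeholder linear cost term

x = cp.Variable(T)                       # trading rate in each period
objective = cp.Minimize(0.5 * cp.quad_form(x, P) + q @ x)
constraints = [
    cp.sum(x) == 1.0,                    # liquidate one unit in total
    cp.abs(x) <= 0.2,                    # per-period participation limit (illustrative)
]
problem = cp.Problem(objective, constraints)
problem.solve()
print("optimal cost:", problem.value)
print("optimal schedule:", np.round(x.value, 3))
```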

This paper focuses on the construction of accurate and predictive data-driven reduced models of large-scale numerical simulations with complex dynamics and sparse training datasets. In these settings, standard single-domain approaches may be too inaccurate or may overfit and hence generalize poorly. Moreover, processing large-scale datasets typically requires significant memory and computing resources, which can render single-domain approaches computationally prohibitive. To address these challenges, we introduce a domain decomposition formulation into the construction of a data-driven reduced model. In doing so, the basis functions used in the reduced-model approximation become localized in space, which can increase the accuracy of the domain-decomposed approximation of the complex dynamics. The decomposition furthermore reduces the memory and computing requirements needed to process the underlying large-scale training dataset. We demonstrate the effectiveness and scalability of our approach in a large-scale three-dimensional unsteady rotating detonation rocket engine simulation scenario with over $75$ million degrees of freedom and a sparse training dataset. Our results show that, compared to the single-domain approach, the domain-decomposed version reduces both the training and prediction errors by up to $13\%$ for pressure and by up to $5\%$ for other key quantities, such as temperature and the fuel and oxidizer mass fractions. Lastly, our approach decreases the memory requirements for processing by almost a factor of four, which in turn reduces the computing requirements as well.
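
The core idea of localizing the reduced basis can be sketched in a few lines: split the state vector by subdomain and compute a separate POD basis per block from the snapshot data. This is a simplified illustration (two equal blocks, a plain SVD, toy random data), not the paper's full reduced-model construction.

```python
import numpy as np

def local_pod_bases(snapshots, dof_blocks, rank):
    """Compute one POD basis per subdomain block of degrees of freedom.
    snapshots: (n_dof, n_snapshots) array; dof_blocks: list of index arrays."""
    bases = []
    for block in dof_blocks:
        U, s, _ = np.linalg.svd(snapshots[block], full_matrices=False)
        bases.append(U[:, :rank])          # basis vectors localized to this subdomain
    return bases

# Toy data: 1000 DOFs, 40 snapshots, two subdomains of 500 DOFs each.
rng = np.random.default_rng(4)
Q = rng.standard_normal((1000, 40))
blocks = [np.arange(0, 500), np.arange(500, 1000)]
bases = local_pod_bases(Q, blocks, rank=10)
print([B.shape for B in bases])            # [(500, 10), (500, 10)]
```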

This paper deals with the nonlinear mechanics of an elevator brake system subject to uncertainties. A deterministic model that relates the braking force to the uncertain parameters is deduced from mechanical equilibrium conditions. To take into account parameter variability, a parametric probabilistic approach is employed. In this stochastic formalism, the uncertain parameters are modeled as random variables, with distributions specified by the maximum entropy principle. The uncertainties are propagated by the Monte Carlo method, which provides a detailed statistical characterization of the response. This work also considers the optimal design of the brake system, formulating and solving nonlinear optimization problems with and without the effects of uncertainties.
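
The propagation step can be sketched generically: sample the uncertain parameters from their maximum-entropy distributions, push each sample through the deterministic braking-force model, and summarize the output statistics. The brake-force formula and the parameter distributions below are placeholders, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(5)
n_samples = 100_000

# Placeholder maximum-entropy choices: a gamma law for a positive parameter with
# prescribed mean and mean-log, uniform laws for bounded ones (illustrative only).
mu_friction = rng.gamma(shape=20.0, scale=0.02, size=n_samples)   # friction coefficient
spring_force = rng.uniform(800.0, 1200.0, size=n_samples)         # spring force [N]
arm_ratio = rng.uniform(1.8, 2.2, size=n_samples)                 # lever-arm ratio

def braking_force(mu, f_spring, ratio):
    """Placeholder deterministic model relating parameters to braking force."""
    return mu * f_spring * ratio

F = braking_force(mu_friction, spring_force, arm_ratio)
print(f"mean = {F.mean():.1f} N, std = {F.std():.1f} N, "
      f"95% interval = [{np.quantile(F, 0.025):.1f}, {np.quantile(F, 0.975):.1f}] N")
```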
