
Topologically interlocked structures are assemblies of interlocking blocks that hold together solely through contact. Such structures have been shown to exhibit high strength, energy dissipation, and crack arrest properties. Recent studies on topologically interlocked structures have shown that both the peak strength and work-to-failure saturate with increasing friction coefficient. However, this saturated structural response is only achievable with nonphysically high values of the friction coefficient. For beam-like topologically interlocked structures, non-planar blocks provide an alternative approach to reach a similar structural response with the friction properties of commonly used materials. It remains unknown whether non-planar blocks have similar effects for slab-like assemblies, and what the achievable structural properties are. Here, we consider slab-like topologically interlocked structures and show, using numerical simulations, that non-planar blocks with wave-like surfaces allow for saturated response capacity of the structure with a realistic friction coefficient. We further demonstrate that non-planar morphologies cause a non-linear scaling of the work-to-failure with peak strength and result in significant improvements of the work-to-failure and ultimate deflection, values that cannot be attained with planar-faced blocks. Finally, we show that the key morphology parameter responsible for the enhanced performance of non-planar blocks with wave-like surfaces is the local angle of inclination at the hinging points of the loaded block. These findings shed new light on topologically interlocked structures with non-planar blocks, allowing for a better understanding of their strength and energy absorption.


A significant limitation of the LTE-V2X and NR-V2X sidelink scheduling mechanisms is their difficulty in coping with variations in inter-packet arrival times, i.e., aperiodic packets. This conflicts with the fundamental characteristics of most V2X services, which are event-triggered, e.g., ETSI Cooperative Awareness Messages (CAMs) for vehicle kinematics, Cooperative Perception Messages (CPMs) for object sensing, and Decentralised Event Notification Messages (DENMs) for event occurrences. Furthermore, network management techniques such as congestion control mechanisms can result in varied inter-packet arrival times. To combat this, NR-V2X introduced a dynamic grant mechanism, which we show is ineffective unless there is background periodic traffic to stabilise the sensing history upon which the scheduler makes its decisions. The characteristics of V2X services make it implausible that such periodic application traffic will exist. To overcome this significant drawback, we demonstrate that the standardised scheduling algorithms can be made effective if the event-triggered arrival rate of packets can be accurately predicted. These predictions can be used to tune the Resource Reservation Interval (RRI) parameter of the MAC scheduler to negate the negative impact of aperiodicity. Such an approach allows the scheduler to achieve comparable performance to a scenario where packets arrive periodically. To demonstrate the effectiveness of our approach, an ML model has been devised for the prediction of cooperative awareness messages, but the same principle can be abstracted to other V2X service types.
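The core idea of using predicted arrival rates to tune the RRI can be sketched minimally as follows. This is an illustration, not the paper's implementation: the candidate RRI set and the nearest-value selection rule are assumptions made for the example.

```python
# Illustrative sketch: map a predicted inter-packet arrival gap to the
# closest candidate Resource Reservation Interval (RRI).  The candidate
# set below is an assumption for this example, not taken from the paper.
CANDIDATE_RRIS_MS = [20, 50, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]

def tune_rri(predicted_gap_ms: float, candidates=CANDIDATE_RRIS_MS) -> int:
    """Pick the candidate RRI closest to the predicted inter-arrival time."""
    return min(candidates, key=lambda rri: abs(rri - predicted_gap_ms))

# Example: a CAM-rate predictor forecasts a 140 ms gap, so the scheduler
# reserves resources every 100 ms rather than using a fixed default RRI.
```

In practice the prediction would come from the ML model the abstract mentions; the point of the sketch is only that the scheduler's periodic reservation is re-parameterised to track the predicted event-triggered rate.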

Many datasets suffer from missing values for various reasons, which not only increases the processing difficulty of related tasks but also reduces the accuracy of classification. To address this problem, the mainstream approach is to use missing value imputation to complete the dataset. Existing imputation methods estimate the missing parts based on the observed values in the original feature space, and they treat all features as equally important during data completion, while in fact different features have different importance. Therefore, we have designed an imputation method that considers feature importance. This algorithm iteratively performs matrix completion and feature importance learning; specifically, matrix completion is based on a filling loss that incorporates feature importance. Our experimental analysis involves three types of datasets: synthetic datasets with different noisy features and missing values, real-world datasets with artificially generated missing values, and real-world datasets originally containing missing values. The results on these datasets consistently show that the proposed method outperforms five existing imputation algorithms. To the best of our knowledge, this is the first work that considers feature importance in the imputation model.
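The general shape of importance-weighted completion can be illustrated with a small sketch. This is not the paper's algorithm: it uses a plain truncated-SVD (hard-impute style) completion in which each feature column is scaled by an importance weight before the low-rank step, so that high-importance features dominate the fitted structure; the weights are taken as given rather than learned.

```python
import numpy as np

def weighted_soft_impute(X, w, rank=2, n_iter=100):
    """Illustrative importance-weighted matrix completion (NOT the paper's
    method).  Each feature column j is scaled by an importance weight w[j]
    before a truncated-SVD step, so high-importance features dominate the
    fitted low-rank structure; observed entries are restored each iteration.
    X: array with np.nan marking missing entries; w: positive weights."""
    mask = ~np.isnan(X)
    # initialise missing entries with column means of the observed values
    filled = np.where(mask, X, np.nanmean(X, axis=0))
    for _ in range(n_iter):
        # truncated SVD of the importance-weighted matrix
        U, s, Vt = np.linalg.svd(filled * w, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank] / w  # undo weighting
        # keep observed entries fixed, update only the missing ones
        filled = np.where(mask, X, low_rank)
    return filled
```

The paper's method additionally learns the importance vector jointly with the completion; here the alternation is truncated to the completion half only.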

We introduce and analyse a family of hash and predicate functions that are more likely to produce collisions for small reducible configurations of vectors. These may offer practical improvements to lattice sieving for short vectors. In particular, in one asymptotic regime the family exhibits significantly different convergent behaviour than existing hash functions and predicates.

Randomness in the void distribution within a ductile metal complicates quantitative modeling of damage following the void growth to coalescence failure process. Though the sequence of micro-mechanisms leading to ductile failure is known from unit cell models, often based on assumptions of a regular distribution of voids, the effect of randomness remains a challenge. In the present work, mesoscale unit cell models, each containing an ensemble of four voids of equal size that are randomly distributed, are used to find statistical effects on the yield surface of the homogenized material. A yield locus is found based on a mean yield surface and a standard deviation of yield points obtained from 15 realizations of the four-void unit cells. It is found that the classical GTN model very closely agrees with the mean of the yield points extracted from the unit cell calculations with random void distributions, while the standard deviation $\textbf{S}$ varies with the imposed stress state. It is shown that the standard deviation is nearly zero for stress triaxialities $T\leq1/3$, while it rapidly increases for triaxialities above $T\approx 1$, reaching maximum values of about $\textbf{S}/\sigma_0\approx0.1$ at $T \approx 4$. At even higher triaxialities it decreases slightly. The results indicate that the dependence of the standard deviation on the stress state follows from variations in the deformation mechanism since a well-correlated variation is found for the volume fraction of the unit cell that deforms plastically at yield. Thus, the random void distribution activates different complex localization mechanisms at high stress triaxialities that differ from the ligament thinning mechanism forming the basis for the classical GTN model. A method for introducing the effect of randomness into the GTN continuum model is presented, and an excellent comparison to the unit cell yield locus is achieved.
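The classical GTN yield condition referenced above has a standard closed form, which can be evaluated directly. A minimal sketch follows; the parameter values q1 = 1.5, q2 = 1.0 are the common Tvergaard choices, assumed here for illustration.

```python
import math

def gtn_yield(sigma_eq, sigma_m, sigma0, f, q1=1.5, q2=1.0):
    """Classical Gurson-Tvergaard-Needleman (GTN) yield function:
        Phi = (sigma_eq/sigma0)^2
              + 2*q1*f*cosh(3*q2*sigma_m / (2*sigma0))
              - 1 - (q1*f)^2
    where sigma_eq is the von Mises stress, sigma_m the mean stress,
    sigma0 the matrix yield stress, and f the void volume fraction.
    Yield occurs at Phi = 0; Phi < 0 is elastic.  With f = 0 the
    expression reduces to the von Mises criterion."""
    return (sigma_eq / sigma0) ** 2 \
        + 2.0 * q1 * f * math.cosh(1.5 * q2 * sigma_m / sigma0) \
        - 1.0 - (q1 * f) ** 2
```

The cosh term is what makes the yield surface shrink with increasing stress triaxiality T = sigma_m / sigma_eq, which is the regime where the abstract reports the largest scatter S from random void distributions.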

We address the problem of testing conditional mean and conditional variance for non-stationary data. We build e-values and p-values for four types of non-parametric composite hypotheses with specified mean and variance, as well as other conditions on the shape of the data-generating distribution. These shape conditions include symmetry, unimodality, and their combination. Using the obtained e-values and p-values, we construct tests via e-processes, also known as testing by betting, as well as tests based on combining p-values. Simulation and empirical studies are conducted for a few settings of the null hypotheses, and they show that methods based on e-processes are efficient.
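The testing-by-betting construction can be illustrated with the simplest case: testing a specified mean for observations bounded in [0, 1]. This sketch is generic and not the paper's construction; the betting fraction `lam` is a fixed assumption here rather than an optimised strategy.

```python
def betting_eprocess(xs, mu0, lam=0.5, alpha=0.05):
    """Illustrative testing-by-betting sketch for H0: E[X] = mu0, X in [0,1].
    The wealth W_t = prod_{i<=t} (1 + lam*(x_i - mu0)) is a nonnegative
    martingale under H0 (each factor has expectation 1), i.e. an e-process,
    so by Ville's inequality we may reject as soon as W_t >= 1/alpha while
    keeping the type-I error at most alpha.  The betting fraction lam must
    satisfy 1 + lam*(x - mu0) > 0 for all x in [0, 1]."""
    wealth = 1.0
    for t, x in enumerate(xs, start=1):
        wealth *= 1.0 + lam * (x - mu0)
        if wealth >= 1.0 / alpha:
            return t, wealth   # stopping time and final e-value
    return None, wealth        # never rejected; wealth is the running e-value
```

Under the alternative (observations consistently above mu0) the wealth grows geometrically and the test stops early; under the null it stays near 1. The paper's e-processes additionally encode shape constraints such as symmetry and unimodality.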

Inference for functional linear models in the presence of heteroscedastic errors has received insufficient attention given its practical importance; in fact, even a central limit theorem has not been studied in this case. At issue, conditional mean (projection of the slope function) estimates have complicated sampling distributions due to the infinite dimensional regressors, which create truncation bias and scaling problems that are compounded by non-constant variance under heteroscedasticity. As a foundation for distributional inference, we establish a central limit theorem for the estimated projection under general dependent errors, and subsequently we develop a paired bootstrap method to approximate sampling distributions. The proposed paired bootstrap does not follow the standard bootstrap algorithm for finite dimensional regressors, as this version fails outside of a narrow window for implementation with functional regressors. The reason owes to a bias with functional regressors in a naive bootstrap construction. Our bootstrap proposal incorporates debiasing and thereby attains much broader validity and flexibility with truncation parameters for inference under heteroscedasticity; even when the naive approach may be valid, the proposed bootstrap method performs better numerically. The bootstrap is applied to construct confidence intervals for projections and for conducting hypothesis tests for the slope function. Our theoretical results on bootstrap consistency are demonstrated through simulation studies and also illustrated with real data examples.
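For orientation, the naive paired-bootstrap recipe that the abstract contrasts against can be sketched in its finite-dimensional form: resample (x, y) pairs with replacement, refit, and take percentile quantiles. This is the baseline only; the paper's point is that for functional regressors this naive version is biased and must be modified with a debiasing step.

```python
import numpy as np

def paired_bootstrap_ci(x, y, n_boot=2000, level=0.95, rng=None):
    """Generic paired-bootstrap percentile CI for the slope of simple
    linear regression (finite-dimensional baseline, NOT the paper's
    debiased functional version): resample (x_i, y_i) PAIRS with
    replacement, refit, and take quantiles of the bootstrap slopes."""
    rng = np.random.default_rng(rng)
    n = len(x)
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)      # resample pairs jointly
        slopes[b] = np.polyfit(x[idx], y[idx], 1)[0]
    lo, hi = np.quantile(slopes, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi
```

Resampling pairs (rather than residuals) is what makes the method valid under heteroscedastic errors, since each bootstrap draw preserves the coupling between a regressor and the variance of its error.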

We investigate the spectrum of differentiation matrices for certain operators on the sphere that are generated from collocation at a set of scattered points $X$ with positive definite and conditionally positive definite kernels. We focus on the case where these matrices are constructed from collocation using all the points in $X$ and from local subsets of points (or stencils) in $X$. The former case is often referred to as the global, Kansa, or pseudospectral method, while the latter is referred to as the local radial basis function (RBF) finite difference (RBF-FD) method. Both techniques are used extensively for numerically solving certain partial differential equations (PDEs) on spheres (and other domains). For time-dependent PDEs on spheres like the (surface) diffusion equation, the spectrum of the differentiation matrices and their stability under perturbations are central to understanding the temporal stability of the underlying numerical schemes. In the global case, we present a perturbation estimate for differentiation matrices which discretize operators that commute with the Laplace-Beltrami operator. In doing so, we demonstrate that if such an operator has negative (non-positive) spectrum, then the differentiation matrix does, too (i.e., it is Hurwitz stable). For conditionally positive definite kernels this is particularly challenging since the differentiation matrices are not necessarily diagonalizable. This perturbation theory is then used to obtain bounds on the spectra of the local RBF-FD differentiation matrices based on the conditionally positive definite surface spline kernels. Numerical results are presented to confirm the theoretical estimates.
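The stability notion at the centre of this abstract is easy to state computationally: a differentiation matrix is Hurwitz stable when every eigenvalue has non-positive real part, so that the semi-discrete system du/dt = Du does not blow up. A minimal check (illustrative only; the paper's contribution is proving this property, not testing it numerically):

```python
import numpy as np

def is_hurwitz(D, tol=1e-10):
    """Return True if the (possibly non-symmetric, possibly
    non-diagonalizable) matrix D is Hurwitz stable, i.e. every
    eigenvalue has real part <= tol."""
    return bool(np.max(np.linalg.eigvals(D).real) <= tol)
```

For time-dependent PDEs such as the surface diffusion equation, this is exactly the property that guarantees temporal stability of standard time integrators applied to the RBF-FD semi-discretisation, which is why the perturbation bounds on the spectrum matter.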

Neural network models of language have long been used as a tool for developing hypotheses about conceptual representation in the mind and brain. For many years, such use involved extracting vector-space representations of words and using distances among these to predict or understand human behavior in various semantic tasks. Contemporary large language models (LLMs), however, make it possible to interrogate the latent structure of conceptual representations using experimental methods nearly identical to those commonly used with human participants. The current work utilizes three common techniques borrowed from cognitive psychology to estimate and compare the structure of concepts in humans and a suite of LLMs. In humans, we show that conceptual structure is robust to differences in culture, language, and method of estimation. Structures estimated from LLM behavior, while individually fairly consistent with those estimated from human behavior, vary much more depending upon the particular task used to generate responses: across tasks, estimates of conceptual structure from the very same model cohere less with one another than do human structure estimates. These results highlight an important difference between contemporary LLMs and human cognition, with implications for understanding some fundamental limitations of contemporary machine language.
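The cross-task coherence comparison described above can be quantified in the standard representational-similarity style: each task yields an item-by-item similarity matrix, and agreement between two estimates is the correlation of their off-diagonal entries. A sketch (illustrative; the paper's specific estimation tasks are not reproduced here):

```python
import numpy as np

def structure_agreement(sim_a, sim_b):
    """RSA-style agreement between two estimated conceptual structures,
    each given as a symmetric item-by-item similarity matrix: Pearson
    correlation of their upper-triangular (off-diagonal) entries."""
    iu = np.triu_indices_from(sim_a, k=1)
    return np.corrcoef(sim_a[iu], sim_b[iu])[0, 1]
```

Applied across pairs of tasks, higher average agreement for human-derived matrices than for matrices derived from the same LLM under different tasks is the pattern the abstract reports.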

Large datasets are often affected by cell-wise outliers in the form of missing or erroneous data. However, discarding any samples containing outliers may result in a dataset that is too small to accurately estimate the covariance matrix. Moreover, the robust procedures designed to address this problem require the invertibility of the covariance operator and thus are not effective on high-dimensional data. In this paper, we propose an unbiased estimator for the covariance in the presence of missing values that does not require any imputation step and still achieves near-minimax statistical accuracy in the operator norm. We also advocate for its use in combination with cell-wise outlier detection methods to tackle cell-wise contamination in a high-dimensional and low-rank setting, where state-of-the-art methods may suffer from numerical instability and long computation times. To complement our theoretical findings, we conducted an experimental study which demonstrates the superiority of our approach over the state of the art in both low- and high-dimensional settings.
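The imputation-free idea can be illustrated with the well-known rescaling construction for entries missing completely at random: zero-fill the missing cells, form the empirical second moment, and rescale diagonal and off-diagonal entries by the observation probability. This sketch assumes mean-zero data and a known observation probability `delta`; it is a generic Lounici-style construction, not necessarily the paper's exact estimator.

```python
import numpy as np

def debiased_cov(X_nan, delta):
    """Unbiased covariance sketch under entries missing completely at
    random with observation probability delta (assumed known; data
    assumed mean-zero).  Zero-filling multiplies off-diagonal second
    moments by delta^2 and diagonal ones by delta, so rescaling by
    1/delta^2 and 1/delta respectively removes the bias."""
    Y = np.nan_to_num(X_nan)        # zero-fill missing cells; no imputation
    n = X_nan.shape[0]
    S = Y.T @ Y / n                 # second moment of zero-filled data
    out = S / delta**2              # debias off-diagonal entries
    np.fill_diagonal(out, np.diag(S) / delta)   # debias diagonal entries
    return out
```

Because the correction is a deterministic rescaling, it inherits none of the numerical instability of iterative imputation, which is the practical advantage the abstract emphasises for the high-dimensional low-rank regime.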

The development of cubical type theory inspired the idea of "extension types", which have been found to have applications in other type theories unrelated to homotopy type theory or cubical type theory. This article describes these applications, including records, metaprogramming, controlling unfolding, and some more exotic ones.
