
Mechanical metamaterials are artificially engineered microstructures that exhibit novel mechanical behavior on the macroscopic scale. Active metamaterials can additionally be controlled externally. Pneumatically actuated metamaterials change their mechanical, acoustic, or other effective behavior in response to applied pressure, with possible applications ranging from soft robotic actuators to phononic crystals. To facilitate the design of such pneumatically actuated metamaterials and structures by topology optimization, a robust way of modeling them computationally is needed, capturing both pneumatic actuation of internal voids and internal contact. Since voids in topology optimization are often modeled using a soft material model, the third medium contact formulation lends itself as a suitable stepping stone. We propose a single hyperelastic material model capable of maintaining a prescribed hydrostatic Cauchy stress within a void in the pre-contact phase while simultaneously acting as a third medium to enforce frictionless contact, in contrast to existing third medium approaches, which focus solely on contact. We split the overall third-medium energy density into contact, regularization, and pneumatic pressure contributions, all of which can be individually controlled and tuned. To prevent distortions of the compliant third medium, we include curvature penalization in our model. This improves on existing formulations in terms of compliant third medium behavior and ultimately leads to better numerical stability of the solution. Since our formulation is energetically consistent, we are able to employ more advanced finite element solvers, such as the modified Cholesky algorithm, to detect instabilities. We demonstrate the behavior of the proposed formulation on several traditional contact benchmarks, including a standard patch test, and validate it against experimental measurements.
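As a rough illustration of the stated split, the third-medium strain energy density could be written as a sum of three tunable contributions; the symbols below are illustrative placeholders, not the paper's notation:

\[
\Psi(\mathbf{F}) \;=\; \Psi_{\mathrm{c}}(\mathbf{F}) \;+\; \Psi_{\mathrm{r}}(\mathbf{F}, \nabla \mathbf{F}) \;+\; \Psi_{\mathrm{p}}(J), \qquad J = \det \mathbf{F},
\]

where $\Psi_{\mathrm{c}}$ stiffens under compression to enforce frictionless contact, $\Psi_{\mathrm{r}}$ regularizes the compliant medium and carries the curvature penalization, and $\Psi_{\mathrm{p}}$ supplies the pneumatic actuation. For instance, a pneumatic term of the form $\Psi_{\mathrm{p}}(J) = -p_0\, J$ yields a hydrostatic Cauchy stress $\boldsymbol{\sigma} = -p_0 \mathbf{I}$ in the limit of a vanishingly stiff medium, i.e., a prescribed pressure $p_0$ inside the void before contact.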

Related content

Change detection is a fundamental task in computer vision that processes a bi-temporal image pair to differentiate between semantically altered and unaltered regions. Large language models (LLMs) have been utilized in various domains for their exceptional feature extraction capabilities and have shown promise in numerous downstream applications. In this study, we harness the power of a pre-trained LLM, extracting feature maps from extensive datasets, and employ an auxiliary network to detect changes. Unlike existing LLM-based change detection methods that solely focus on deriving high-quality feature maps, our approach emphasizes the manipulation of these feature maps to enhance semantic relevance.
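A minimal sketch of the generic pipeline this describes: a frozen pre-trained feature extractor produces bi-temporal feature maps, which a small auxiliary network fuses into a change map. The backbone (a torchvision ResNet-18 here), the layer cut, and the head architecture are placeholder assumptions, not the paper's model.

```python
# Frozen pre-trained extractor + auxiliary change head (illustrative only).
import torch
import torch.nn as nn
import torchvision.models as models

class ChangeHead(nn.Module):
    """Auxiliary network: fuses bi-temporal feature maps into a change map."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),   # per-pixel change logit
        )

    def forward(self, f_t1, f_t2):
        return self.fuse(torch.cat([f_t1, f_t2], dim=1))

# Frozen pre-trained backbone used purely as a feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor = nn.Sequential(*list(backbone.children())[:-2]).eval()
for p in extractor.parameters():
    p.requires_grad = False

head = ChangeHead(in_channels=512)

img_t1 = torch.randn(1, 3, 256, 256)   # image at time 1
img_t2 = torch.randn(1, 3, 256, 256)   # image at time 2
with torch.no_grad():
    f1, f2 = extractor(img_t1), extractor(img_t2)
change_logits = head(f1, f2)           # (1, 1, 8, 8) coarse change map
```

Only the auxiliary head would be trained; upsampling the coarse logits back to image resolution is omitted for brevity.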

We introduce a computationally efficient algorithm for zeroth-order bandit convex optimisation and prove that in the adversarial setting its regret is at most $d^{3.5} \sqrt{n} \mathrm{polylog}(n, d)$ with high probability where $d$ is the dimension and $n$ is the time horizon. In the stochastic setting the bound improves to $M d^{2} \sqrt{n} \mathrm{polylog}(n, d)$ where $M \in [d^{-1/2}, d^{-1 / 4}]$ is a constant that depends on the geometry of the constraint set and the desired computational properties.

3D Gaussian Splatting (3DGS) has recently revolutionized radiance field reconstruction, achieving high quality novel view synthesis and fast rendering speed without baking. However, 3DGS fails to accurately represent surfaces due to the multi-view inconsistent nature of 3D Gaussians. We present 2D Gaussian Splatting (2DGS), a novel approach to model and reconstruct geometrically accurate radiance fields from multi-view images. Our key idea is to collapse the 3D volume into a set of 2D oriented planar Gaussian disks. Unlike 3D Gaussians, 2D Gaussians provide view-consistent geometry while modeling surfaces intrinsically. To accurately recover thin surfaces and achieve stable optimization, we introduce a perspective-correct 2D splatting process utilizing ray-splat intersection and rasterization. Additionally, we incorporate depth distortion and normal consistency terms to further enhance the quality of the reconstructions. We demonstrate that our differentiable renderer allows for noise-free and detailed geometry reconstruction while maintaining competitive appearance quality, fast training speed, and real-time rendering.
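To make the ray-splat idea concrete, the sketch below evaluates a single 2D Gaussian disk at the exact intersection of a camera ray with the disk's plane, rather than at an approximate screen-space projection. The parameter names and the explicit plane intersection are illustrative assumptions, not the paper's optimized rasterizer.

```python
# Evaluate one 2D Gaussian splat via ray-plane intersection (illustrative).
import numpy as np

def eval_2d_splat(ray_o, ray_d, center, t_u, t_v, s_u, s_v):
    """Intersect a ray with the splat's plane and evaluate the 2D Gaussian
    in the splat's local (u, v) tangent frame.

    t_u, t_v: orthonormal tangent axes of the disk; s_u, s_v: per-axis extents.
    """
    normal = np.cross(t_u, t_v)
    denom = ray_d @ normal
    if abs(denom) < 1e-9:            # ray parallel to the disk's plane
        return 0.0
    t = ((center - ray_o) @ normal) / denom
    if t <= 0:                       # intersection behind the ray origin
        return 0.0
    p = ray_o + t * ray_d            # 3D intersection point
    local = p - center
    u = (local @ t_u) / s_u          # local coordinates, scaled by extents
    v = (local @ t_v) / s_v
    return float(np.exp(-0.5 * (u * u + v * v)))

# Example: a unit disk in the z = 0 plane, viewed along -z.
alpha = eval_2d_splat(
    ray_o=np.array([0.1, 0.0, 5.0]), ray_d=np.array([0.0, 0.0, -1.0]),
    center=np.zeros(3),
    t_u=np.array([1.0, 0.0, 0.0]), t_v=np.array([0.0, 1.0, 0.0]),
    s_u=1.0, s_v=1.0,
)
```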

This study proposes a novel video recommendation approach that leverages implicit user feedback in the form of viewing percentages and social network analysis techniques. By constructing a video similarity network based on user viewing patterns and computing centrality measures, the methodology identifies important and well-connected videos. Modularity analysis is then used to cluster closely related videos, forming the basis for personalized recommendations. For each user, candidate videos are selected from the cluster containing their preferred items and ranked using an ego-centric index that measures proximity to the user's likes and dislikes. The proposed approach was evaluated on real user data from an Asian video-on-demand platform. Offline experiments demonstrated improved accuracy compared to conventional methods such as Naive Bayes, SVM, decision trees, and nearest neighbor algorithms. An online user study further validated the effectiveness of the recommendations, with significant increases observed in click-through rate, view completion rate, and user satisfaction scores relative to the platform's existing system. These results underscore the value of incorporating implicit feedback and social network analysis for video recommendations. The key contributions of this research include a novel video recommendation framework that integrates implicit user data and social network analysis, the use of centrality measures and modularity-based clustering, an ego-centric ranking approach, and rigorous offline and online evaluation demonstrating superior performance compared to existing techniques. This study opens new avenues for enhancing video recommendations and user engagement in VOD platforms.
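A compact sketch of the described pipeline using networkx: a video-similarity graph built from viewing percentages, centrality scoring, modularity-based clustering, and an ego-centric ranking of candidates. The cosine similarity, edge threshold, and ranking index are illustrative choices, not the paper's exact formulas.

```python
# Video recommendation from implicit feedback via a similarity graph (sketch).
import networkx as nx
import numpy as np

# views[u][v] = fraction of video v that user u watched (toy data).
views = {
    "u1": {"a": 0.9, "b": 0.8, "c": 0.1},
    "u2": {"a": 0.7, "b": 0.9, "d": 0.8},
    "u3": {"c": 0.9, "d": 0.2, "e": 0.8},
}
videos = sorted({v for per_user in views.values() for v in per_user})

def similarity(v1, v2):
    """Cosine similarity of the two videos' viewing-percentage profiles."""
    x = np.array([views[u].get(v1, 0.0) for u in views])
    y = np.array([views[u].get(v2, 0.0) for u in views])
    denom = np.linalg.norm(x) * np.linalg.norm(y)
    return float(x @ y / denom) if denom > 0 else 0.0

# Build the video-similarity network, keeping only sufficiently similar pairs.
G = nx.Graph()
G.add_nodes_from(videos)
for i, v1 in enumerate(videos):
    for v2 in videos[i + 1:]:
        w = similarity(v1, v2)
        if w > 0.3:
            G.add_edge(v1, v2, weight=w)

centrality = nx.degree_centrality(G)      # "important, well-connected" videos
clusters = list(nx.algorithms.community.greedy_modularity_communities(G))

def recommend(liked, disliked, k=3):
    """Rank candidates from the liked videos' cluster by proximity to likes
    minus proximity to dislikes (a simple ego-centric index)."""
    cluster = next(c for c in clusters if any(v in c for v in liked))
    def score(v):
        sim_like = sum(similarity(v, l) for l in liked)
        sim_dislike = sum(similarity(v, d) for d in disliked)
        return sim_like - sim_dislike + centrality[v]
    candidates = [v for v in cluster if v not in liked and v not in disliked]
    return sorted(candidates, key=score, reverse=True)[:k]

print(recommend(liked=["a"], disliked=["c"]))
```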

This paper introduces a novel competitive mechanism into differential evolution (DE), presenting an effective DE variant named competitive DE (CDE). CDE features a simple yet efficient mutation strategy: DE/winner-to-best/1. Essentially, the proposed DE/winner-to-best/1 strategy can be viewed as an intelligent integration of the existing DE/rand-to-best/1 and DE/current-to-best/1 mutation strategies. The incorporation of DE/winner-to-best/1 and the competitive mechanism opens new avenues for advancing DE techniques. Moreover, in CDE, the scaling factor $F$ and crossover rate $Cr$ are drawn from a normally distributed random number generator, as suggested by previous research. To investigate the performance of the proposed CDE, comprehensive numerical experiments are conducted on the CEC2017 benchmark suite and engineering simulation optimization tasks, with CMA-ES, JADE, and other state-of-the-art optimizers and DE variants employed as competitor algorithms. The experimental results and statistical analyses highlight the promising potential of CDE as an alternative optimizer for addressing diverse optimization challenges.
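One plausible reading of the named strategy, sketched below: two randomly paired individuals compete, and the winner serves as the base vector, pulled toward the population's best solution plus a random difference vector. This is an illustrative interpretation, not the paper's exact formulation.

```python
# Sketch of a possible "DE/winner-to-best/1" mutation (illustrative reading).
import numpy as np

rng = np.random.default_rng(0)

def winner_to_best_mutation(pop, fitness, F=0.5):
    """pop: (N, D) population; fitness: (N,) objective values (lower is better)."""
    n, _ = pop.shape
    best = pop[np.argmin(fitness)]
    mutants = np.empty_like(pop)
    for i in range(n):
        # Pairwise competition: the fitter of two random individuals wins.
        a, b = rng.choice(n, size=2, replace=False)
        winner = pop[a] if fitness[a] < fitness[b] else pop[b]
        # Two further random individuals provide the difference vector.
        r1, r2 = rng.choice(n, size=2, replace=False)
        mutants[i] = winner + F * (best - winner) + F * (pop[r1] - pop[r2])
    return mutants

# Toy usage on the sphere function.
pop = rng.uniform(-5, 5, size=(20, 10))
fitness = np.sum(pop ** 2, axis=1)
mutants = winner_to_best_mutation(pop, fitness, F=0.5)
```

Per the abstract, $F$ (and the crossover rate $Cr$ used in the subsequent crossover step) would be drawn from a normal distribution rather than fixed as above.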

Diffusion models have become the most popular approach to deep generative modeling of images, largely due to their empirical performance and reliability. From a theoretical standpoint, a number of recent works~\cite{chen2022,chen2022improved,benton2023linear} have studied the iteration complexity of sampling, assuming access to an accurate diffusion model. In this work, we focus on understanding the \emph{sample complexity} of training such a model; how many samples are needed to learn an accurate diffusion model using a sufficiently expressive neural network? Prior work~\cite{BMR20} showed bounds polynomial in the dimension, desired Total Variation error, and Wasserstein error. We show an \emph{exponential improvement} in the dependence on Wasserstein error and depth, along with improved dependencies on other relevant parameters.

Category discovery methods aim to find novel categories in unlabeled visual data. At training time, a set of labeled and unlabeled images is provided, where the labels correspond to the categories present in the labeled images. The labeled data provides guidance during training by indicating what types of visual properties and features are relevant for performing discovery in the unlabeled data. As a result, changing the categories present in the labeled set can have a large impact on what is ultimately discovered in the unlabeled set. Despite its importance, the impact of labeled data selection has not been explored in the category discovery literature to date. We show that changing the labeled data can significantly impact discovery performance. Motivated by this, we propose two new approaches for automatically selecting the most suitable labeled data based on the similarity between the labeled and unlabeled data. Our observation is that, unlike in conventional supervised transfer learning, the best labeled data is neither too similar nor too dissimilar to the unlabeled categories. Our resulting approaches obtain state-of-the-art discovery performance across a range of challenging fine-grained benchmark datasets.
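One way this selection idea could be operationalized, sketched below: score each candidate labeled subset by its feature similarity to the unlabeled data and prefer an intermediate value. The feature representation (random vectors here in place of a real image encoder), the centroid cosine similarity, and the target similarity are all assumptions, not the paper's method.

```python
# Select a labeled subset with intermediate similarity to the unlabeled data.
import numpy as np

def centroid_similarity(labeled_feats, unlabeled_feats):
    """Cosine similarity between the mean features of the two sets."""
    a = labeled_feats.mean(axis=0)
    b = unlabeled_feats.mean(axis=0)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_labeled_subset(candidates, unlabeled_feats, target=0.5):
    """Pick the candidate subset whose similarity to the unlabeled data is
    closest to an intermediate target value (neither too high nor too low)."""
    scores = {name: centroid_similarity(feats, unlabeled_feats)
              for name, feats in candidates.items()}
    best = min(scores, key=lambda name: abs(scores[name] - target))
    return best, scores

# Toy example with random vectors standing in for encoder features.
rng = np.random.default_rng(0)
unlabeled = rng.normal(size=(100, 32))
candidates = {
    "subset_a": rng.normal(size=(50, 32)),
    "subset_b": unlabeled[:50] + rng.normal(scale=0.5, size=(50, 32)),
    "subset_c": rng.normal(loc=2.0, size=(50, 32)),
}
choice, scores = select_labeled_subset(candidates, unlabeled)
```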

A majority coloring of an undirected graph is a vertex coloring in which, for each vertex, at least as many bi-chromatic edges contain that vertex as monochromatic ones. It is known that every countable graph admits a majority 3-coloring. The Unfriendly Partition Conjecture states that every countable graph admits a majority 2-coloring. Since the 3-coloring result extends to countable DAGs, a variant of the conjecture states that 2 colors are enough to majority color every countable DAG. We show that this is false by presenting a DAG for which 3 colors are necessary. The presented construction is strongly based on a StackExchange discussion regarding labellings of infinite graphs, which is linked in the references.

We propose a new deterministic algorithm called Subtree-Decomposition for the online transportation problem and show that the algorithm is $(8m-5)$-competitive, where $m$ is the number of server sites. It has long been known that the competitive ratio of any deterministic algorithm is lower bounded by $2m-1$ for this problem. On the other hand, the conjecture proposed by Kalyanasundaram and Pruhs in 1998 asking whether a deterministic $(2m-1)$-competitive algorithm exists for the online transportation problem has remained open for over two decades. The upper bound on the competitive ratio, $8m-5$, which is the result of this paper, is the first to come close to this conjecture, and is the best possible within a constant factor.

High spectral dimensionality and the shortage of annotations make hyperspectral image (HSI) classification a challenging problem. Recent studies suggest that convolutional neural networks can learn discriminative spatial features, which play a paramount role in HSI interpretation. However, most of these methods ignore the distinctive spectral-spatial characteristics of hyperspectral data. In addition, the large amount of unlabeled data remains an unexploited gold mine for efficient data use. Therefore, we propose an integration of generative adversarial networks (GANs) and probabilistic graphical models for HSI classification. Specifically, we use a spectral-spatial generator and a discriminator to identify the land-cover categories of hyperspectral cubes. Moreover, to take advantage of the large amount of unlabeled data, we adopt a conditional random field to refine the preliminary classification results generated by the GAN. Experimental results obtained on two commonly studied datasets demonstrate that the proposed framework achieves encouraging classification accuracy using only a small number of training samples.
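As a rough illustration of a spectral-spatial classifier over hyperspectral cubes, the sketch below uses 3D convolutions that mix spectral and spatial dimensions jointly, standing in for the discriminator branch described above. The layer sizes are assumptions, and the GAN training loop and CRF refinement are omitted.

```python
# Spectral-spatial classification of hyperspectral cubes with 3D convolutions.
import torch
import torch.nn as nn

class SpectralSpatialDiscriminator(nn.Module):
    """Classifies a small hyperspectral cube (bands x height x width) into
    one of n_classes land-cover categories; in a GAN setting an extra
    'fake' output could be appended."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            # 3D convolutions treat the spectral axis as a third dimension.
            nn.Conv3d(1, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, cube):                  # cube: (B, 1, bands, H, W)
        f = self.features(cube).flatten(1)
        return self.classifier(f)             # per-cube class logits

model = SpectralSpatialDiscriminator(n_classes=9)
cubes = torch.randn(4, 1, 103, 9, 9)          # batch of 9x9 patches, 103 bands
logits = model(cubes)                         # shape (4, 9)
```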
