
Shape morphing that transforms morphologies in response to stimuli is crucial for future multifunctional systems. While kirigami holds great promise in enhancing shape-morphing, existing designs primarily focus on kinematics and overlook the underlying physics. This study introduces a differentiable inverse design framework that considers the physical interplay between geometry, materials, and stimuli of active kirigami, made of a soft material embedded with magnetic particles, to realize target shape-morphing upon magnetic excitation. We achieve this by combining differentiable kinematics and energy models into a constrained optimization, simultaneously designing the cuts and magnetization orientations to ensure kinematic and physical feasibility. Complex kirigami designs are obtained automatically with unparalleled efficiency; they can be remotely controlled to morph into intricate target shapes and even multiple states. The proposed framework can be extended to accommodate various active systems, bridging geometry and physics to push the frontiers in shape-morphing applications such as flexible electronics and minimally invasive surgery.
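
The design loop described above can be illustrated with a toy model. The sketch below is not the paper's framework: it assumes a hypothetical chain of hinged segments whose hinge rotations are proportional to the magnetic torque acting on per-segment magnetization angles, and fits those angles by gradient-based optimization (here L-BFGS with numerically differentiated gradients) so that the actuated chain matches a prescribed arc. The gain, field direction, and target shape are all made up for illustration.

```python
# Toy gradient-based inverse design: find magnetization angles psi_i so that the
# magnetically actuated chain matches a target arc. All parameters are hypothetical.
import numpy as np
from scipy.optimize import minimize

n_seg = 12
gain = 0.5              # hypothetical torque-to-rotation gain
field_angle = np.pi/2   # hypothetical direction of the applied magnetic field

def forward(psi):
    """Toy physics: each hinge rotates in proportion to the magnetic torque on its
    segment, ~ sin(psi_i - field_angle); node positions follow from the folded angles."""
    hinge = gain*np.sin(psi - field_angle)
    heading = np.cumsum(hinge)
    steps = np.stack([np.cos(heading), np.sin(heading)], axis=1)
    return np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])

# Hypothetical target: the actuated chain should lie on a quarter-circle arc.
s = np.linspace(0.0, 1.0, n_seg + 1)
radius = 2*n_seg/np.pi
target = radius*np.stack([np.sin(s*np.pi/2), 1 - np.cos(s*np.pi/2)], axis=1)

def loss(psi):
    return np.sum((forward(psi) - target)**2)

rng = np.random.default_rng(0)
res = minimize(loss, 0.2*rng.standard_normal(n_seg), method="L-BFGS-B")
print("design (magnetization angles):", np.round(res.x, 3))
print("remaining shape error:", res.fun)
```

In the actual framework the forward model couples cut geometry, elastic energy, and magnetic loading, and the gradients come from the differentiable kinematics and energy models rather than finite differences.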

Related content

Problems from metric graph theory such as Metric Dimension, Geodetic Set, and Strong Metric Dimension have recently had a strong impact on the field of parameterized complexity by being the first problems in NP to admit double-exponential lower bounds in the treewidth, and even in the vertex cover number for the latter. We initiate the study of enumerating minimal solution sets for these problems and show that they are also of great interest in enumeration. More specifically, we show that enumerating minimal resolving sets in graphs and minimal geodetic sets in split graphs are equivalent to hypergraph dualization, arguably one of the most important open problems in algorithmic enumeration. This provides two new natural examples for a question that has emerged in different works over the last decade: for which vertex (or edge) set graph property $\Pi$ is the enumeration of minimal (or maximal) subsets satisfying $\Pi$ equivalent to hypergraph dualization? As only very few properties are known to fit within this context -- namely, properties related to minimal domination -- our results make significant progress in characterizing such properties, and provide new angles of approach for tackling hypergraph dualization. In a second step, we consider cases where our reductions do not apply, namely graphs with no long induced paths, and show these cases to be mainly tractable.
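
For readers unfamiliar with the objects involved, the brute-force sketch below enumerates minimal resolving sets of a small graph and minimal transversals (hitting sets) of a small hypergraph, the latter being the output of hypergraph dualization. It is exponential-time and only meant to make the definitions concrete; the example graph and hypergraph are arbitrary.

```python
# Brute-force enumeration (small instances only) of minimal resolving sets and
# minimal hypergraph transversals.
from itertools import combinations
from collections import deque

# A small graph (a 6-cycle with one chord), given as an adjacency list.
adj = {0: [1, 5], 1: [0, 2], 2: [1, 3, 5], 3: [2, 4], 4: [3, 5], 5: [4, 0, 2]}
V = sorted(adj)

def bfs_dist(src):
    d = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in d:
                d[w] = d[u] + 1
                q.append(w)
    return d

D = {v: bfs_dist(v) for v in V}

def resolves(S):
    """S is resolving iff the distance vectors to S separate all vertices."""
    return len({tuple(D[s][v] for s in S) for v in V}) == len(V)

# Minimal transversals (hitting sets) of a small hypergraph: the objects of
# hypergraph dualization.
edges = [{0, 1, 2}, {2, 3}, {1, 3, 4}]
U = sorted(set().union(*edges))

def hits(T):
    return all(T & e for e in edges)

def minimal_sets(universe, prop):
    """Inclusion-minimal subsets satisfying a monotone property `prop` (brute force)."""
    out = []
    for r in range(1, len(universe) + 1):
        for S in map(set, combinations(universe, r)):
            if prop(S) and not any(prop(S - {x}) for x in S):
                out.append(S)
    return out

print("minimal resolving sets:", minimal_sets(V, resolves))
print("minimal transversals:  ", minimal_sets(U, hits))
```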

We introduce a flexible method to simultaneously infer both the drift and volatility functions of a discretely observed scalar diffusion. We use spline bases to represent these functions and develop a Markov chain Monte Carlo algorithm to infer, a posteriori, their coefficients in the spline basis. A key innovation is that we use spline bases to model transformed versions of the drift and volatility functions rather than the functions themselves. The output of the algorithm is a posterior sample of plausible drift and volatility functions that are not constrained to any particular parametric family. The flexibility of this approach provides practitioners a powerful investigative tool, allowing them to posit a variety of parametric models to better capture the underlying dynamics of their processes of interest. We illustrate the versatility of our method by applying it to challenging datasets from finance, paleoclimatology, and astrophysics. In view of the parametric diffusion models widely employed in the literature for those examples, some of our results are surprising since they call into question some aspects of these models.
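
A minimal sketch of the modelling idea, not the authors' algorithm: the drift is represented in a B-spline basis and the volatility through a transformation (here the spline models the log-volatility, keeping $\sigma > 0$), the discretely observed path enters through an Euler pseudo-likelihood, and the spline coefficients are explored with random-walk Metropolis. Knot placement, priors, proposal scale, and the synthetic Ornstein-Uhlenbeck data are all hypothetical choices.

```python
# Spline bases for drift and log-volatility + Euler pseudo-likelihood + random-walk
# Metropolis (illustration only).
import numpy as np
from scipy.interpolate import BSpline
from scipy.stats import norm

rng = np.random.default_rng(1)

# Synthetic data: an Ornstein-Uhlenbeck path observed at spacing dt.
dt, n = 0.05, 2000
x = np.empty(n); x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = x[i] - x[i]*dt + 0.4*np.sqrt(dt)*rng.standard_normal()

deg, m = 3, 8
a, b = x.min() - 0.5, x.max() + 0.5
knots = np.concatenate([[a]*deg, np.linspace(a, b, m - deg + 1), [b]*deg])

def spline(coef):
    return BSpline(knots, coef, deg)

def log_post(theta):
    cb, cs = theta[:m], theta[m:]
    drift = spline(cb)(x[:-1])
    sigma = np.exp(spline(cs)(x[:-1]))          # transformed: spline models log-volatility
    ll = norm.logpdf(x[1:], loc=x[:-1] + drift*dt, scale=sigma*np.sqrt(dt)).sum()
    return ll + norm.logpdf(theta, scale=2.0).sum()   # weak iid normal prior

theta = np.zeros(2*m)
lp = log_post(theta)
samples = []
for it in range(5000):
    prop = theta + 0.05*rng.standard_normal(2*m)      # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

post = np.array(samples[2500:])
print("posterior mean drift coefficients:", post[:, :m].mean(axis=0).round(2))
```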

We present an efficient preconditioner for linear problems $A x=y$. It guarantees monotonic convergence of the memory-efficient fixed-point iteration for all accretive systems of the form $A = L + V$, where $L$ is an approximation of $A$, and the system is scaled so that the discrepancy is bounded with $\lVert V \rVert<1$. In contrast to common splitting preconditioners, our approach is not restricted to any particular splitting. Therefore, the approximate problem can be chosen so that an analytic solution is available to efficiently evaluate the preconditioner. We prove that the only preconditioner with this property has the form $(L+I)(I - V)^{-1}$. This unique form moreover permits the elimination of the forward problem from the preconditioned system, often halving the time required per iteration. We demonstrate and evaluate our approach for wave problems, diffusion problems, and pantograph delay differential equations. With the latter we show how the method extends to general, not necessarily accretive, linear systems.
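
The sketch below shows one straightforward way such a split preconditioner can be applied matrix-free: since $M = (L+I)(I - V)^{-1}$, applying $M^{-1}$ amounts to a solve with $L+I$ followed by a multiplication with $I-V$. For brevity the solve uses a sparse prefactorization (standing in for the analytic solve one would choose $L$ to admit) and the preconditioned system is handed to GMRES rather than the paper's memory-efficient fixed-point iteration; the 1D diffusion-type test problem is hypothetical.

```python
# Matrix-free application of Minv = (I - V)(L + I)^{-1} as a preconditioner.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, factorized, gmres

# Hypothetical test problem: A = L + V with L a constant-coefficient operator
# (a scaled Dirichlet Laplacian, cheap to solve) and V a variable coefficient
# scaled so that ||V|| < 1.
n, h = 200, 1.0/201
L = sp.diags([-np.ones(n-1), 2.0*np.ones(n), -np.ones(n-1)], [-1, 0, 1], format="csc")/h**2
q = 0.3 + 0.5*np.random.default_rng(0).random(n)   # entries in [0.3, 0.8), so ||V|| < 1
A = (L + sp.diags(q)).tocsc()
y = np.ones(n)

# Applying Minv: one solve with L + I, then multiplication with I - V.
solve_LpI = factorized((L + sp.identity(n)).tocsc())
def apply_Minv(r):
    z = solve_LpI(r)
    return z - q*z

Minv = LinearOperator((n, n), matvec=apply_Minv)
x, info = gmres(A, y, M=Minv)
print("gmres info:", info,
      " relative residual:", np.linalg.norm(A @ x - y)/np.linalg.norm(y))
```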

Gaussian processes (GPs) are popular nonparametric statistical models for learning unknown functions and quantifying the spatiotemporal uncertainty in data. Recent works have extended GPs to model scalar and vector quantities distributed over non-Euclidean domains, including smooth manifolds appearing in numerous fields such as computer vision, dynamical systems, and neuroscience. However, these approaches assume that the manifold underlying the data is known, limiting their practical utility. We introduce RVGP, a generalisation of GPs for learning vector signals over latent Riemannian manifolds. Our method uses positional encoding with eigenfunctions of the connection Laplacian, associated with the tangent bundle, readily derived from a common graph-based approximation of the data. We demonstrate that RVGP possesses global regularity over the manifold, which allows it to super-resolve and inpaint vector fields while preserving singularities. Furthermore, we use RVGP to reconstruct high-density neural dynamics derived from low-density EEG recordings in healthy individuals and Alzheimer's patients. We show that vector field singularities are important disease markers and that their reconstruction leads to classification accuracy of disease states comparable to that of high-density recordings. Thus, our method overcomes a significant practical limitation in experimental and clinical applications.
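
A heavily simplified sketch of the positional-encoding idea: eigenvectors of a graph Laplacian built from point samples act as coordinates on the latent manifold, and an ordinary GP regression on those coordinates inpaints a signal at unobserved nodes. This uses the scalar graph Laplacian and a scalar signal, whereas RVGP uses eigenfunctions of the connection Laplacian on the tangent bundle to handle vector fields; the circle data, neighbourhood size, and kernel parameters are arbitrary.

```python
# Graph-Laplacian positional encodings + GP regression for inpainting (simplified).
import numpy as np

rng = np.random.default_rng(0)
n = 300
theta = np.sort(rng.uniform(0, 2*np.pi, n))
pts = np.c_[np.cos(theta), np.sin(theta)]      # point samples of a latent circle in R^2
signal = np.sin(3*theta)                       # scalar field to be inpainted

# k-nearest-neighbour graph and its (unnormalised) graph Laplacian.
d2 = ((pts[:, None, :] - pts[None, :, :])**2).sum(-1)
k = 8
W = np.zeros((n, n))
for i in range(n):
    W[i, np.argsort(d2[i])[1:k+1]] = 1.0
W = np.maximum(W, W.T)
Lap = np.diag(W.sum(axis=1)) - W

# Low-frequency Laplacian eigenvectors act as positional encodings of the manifold.
evals, evecs = np.linalg.eigh(Lap)
phi = evecs[:, 1:9]*np.sqrt(n)                 # skip the constant mode, rescale to O(1)

# Plain GP (kernel-ridge form) on the encodings; only a subset of nodes is observed.
obs = rng.choice(n, size=60, replace=False)
def rbf(X, Z, ell=1.0):
    D2 = ((X[:, None, :] - Z[None, :, :])**2).sum(-1)
    return np.exp(-0.5*D2/ell**2)

K = rbf(phi[obs], phi[obs]) + 1e-3*np.eye(obs.size)
alpha = np.linalg.solve(K, signal[obs])
pred = rbf(phi, phi[obs]) @ alpha              # posterior-mean reconstruction everywhere
print("inpainting RMSE:", np.sqrt(np.mean((pred - signal)**2)))
```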

We consider a kinetic model of an N-species gas mixture modeled with quantum Bhatnagar-Gross-Krook (BGK) collision operators. The collision operators consist of a relaxation to a Maxwell distribution in the classical case, a Fermi distribution for fermions, and a Bose-Einstein distribution for bosons. In this paper, we present a numerical method for simulating this model, which uses an Implicit-Explicit (IMEX) scheme to minimize a certain potential function. This is motivated by theoretical considerations coming from entropy minimization. We show that theoretical properties such as conservation of mass, total momentum, and total energy, as well as positivity of the distribution functions, are preserved by the numerical method presented in this paper, and we illustrate its usefulness and effectiveness with numerical examples.
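
A space-homogeneous sketch of the relaxation structure: the Maxwell, Fermi-Dirac, and Bose-Einstein equilibria on a 1D velocity grid, and one implicit update of the stiff relaxation term, $f^{n+1} = (f^n + (\Delta t/\tau) f^{\mathrm{eq}})/(1 + \Delta t/\tau)$, which keeps $f^{n+1} \geq 0$ whenever $f^n$ and $f^{\mathrm{eq}}$ are nonnegative. The parameters $(u, T, \mu, \tau)$ are fixed by hand here; in the paper they are tied to the moments of $f$ through the potential/entropy minimisation, which this sketch does not reproduce.

```python
# Quantum BGK equilibria and one implicit relaxation step (illustration only).
import numpy as np

v = np.linspace(-8, 8, 401)                   # velocity grid

def equilibrium(v, u, T, mu, stat):
    """stat = 0: Maxwell, +1: Fermi-Dirac, -1: Bose-Einstein."""
    e = ((v - u)**2/2 - mu)/T
    if stat == 0:
        return np.exp(-e)
    return 1.0/(np.exp(e) + stat)

# An out-of-equilibrium initial condition (two bumps) and one implicit BGK step.
f = np.exp(-(v - 2)**2) + 0.5*np.exp(-(v + 3)**2)
dt, tau = 0.1, 1e-3                           # stiff relaxation time
for stat, name in [(0, "Maxwell"), (1, "Fermi"), (-1, "Bose")]:
    feq = equilibrium(v, u=0.0, T=1.0, mu=-1.0, stat=stat)
    f_new = (f + (dt/tau)*feq)/(1 + dt/tau)   # implicit in the relaxation term
    print(name, "min f^{n+1} =", f_new.min(), ">= 0")
```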

We present ReCAT, a recursive composition augmented Transformer that is able to explicitly model hierarchical syntactic structures of raw texts without relying on gold trees during both learning and inference. Existing research along this line restricts data to follow a hierarchical tree structure and thus lacks inter-span communications. To overcome the problem, we propose a novel contextual inside-outside (CIO) layer that learns contextualized representations of spans through bottom-up and top-down passes, where a bottom-up pass forms representations of high-level spans by composing low-level spans, while a top-down pass combines information inside and outside a span. By stacking several CIO layers between the embedding layer and the attention layers in Transformer, the ReCAT model can perform both deep intra-span and deep inter-span interactions, and thus generate multi-grained representations fully contextualized with other spans. Moreover, the CIO layers can be jointly pre-trained with Transformers, making ReCAT enjoy scaling ability, strong performance, and interpretability at the same time. We conduct experiments on various sentence-level and span-level tasks. Evaluation results indicate that ReCAT can significantly outperform vanilla Transformer models on all span-level tasks and baselines that combine recursive networks with Transformers on natural language inference tasks. More interestingly, the hierarchical structures induced by ReCAT exhibit strong consistency with human-annotated syntactic trees, indicating good interpretability brought by the CIO layers.

We show how quantum-inspired 2d tensor networks can be used to efficiently and accurately simulate the largest quantum processors from IBM, namely Eagle (127 qubits), Osprey (433 qubits) and Condor (1121 qubits). We simulate the dynamics of a complex quantum many-body system -- specifically, the kicked Ising experiment considered recently by IBM in Nature 618, p. 500-505 (2023) -- using graph-based Projected Entangled Pair States (gPEPS), which were proposed by some of us in PRB 99, 195105 (2019). Our results show that simple tensor updates are already sufficient to achieve unprecedented accuracy with remarkably low computational resources for this model. Apart from simulating the original experiment for 127 qubits, we also extend our results to 433 and 1121 qubits, thus setting a benchmark for the newest IBM quantum machines. We also report accurate simulations for infinitely many qubits. Our results show that gPEPS are a natural tool to efficiently simulate quantum computers with an underlying lattice-based qubit connectivity, such as all quantum processors based on superconducting qubits.
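
As a point of reference for the dynamics being simulated, the sketch below runs a brute-force statevector simulation (not gPEPS) of kicked-Ising Trotter steps on a short open chain: each step applies RX rotations to every qubit and RZZ rotations to every nearest-neighbour bond, and the magnetization of the middle qubit is tracked. Chain length, angles, and step count are illustrative; tensor-network contractions are what make the 127- to 1121-qubit regimes of the paper reachable.

```python
# Exact statevector simulation of kicked-Ising Trotter steps on a small open chain.
import numpy as np

n, steps = 10, 20
theta_h, theta_J = 0.6, -np.pi/2

rx = np.array([[np.cos(theta_h/2), -1j*np.sin(theta_h/2)],
               [-1j*np.sin(theta_h/2), np.cos(theta_h/2)]])
z = np.array([1.0, -1.0])
zz_phase = np.exp(-1j*theta_J/2*np.outer(z, z))      # diagonal RZZ gate

psi = np.zeros((2,)*n, dtype=complex)
psi[(0,)*n] = 1.0                                    # |00...0>

def apply_1q(psi, gate, q):
    return np.moveaxis(np.tensordot(gate, psi, axes=([1], [q])), 0, q)

def apply_zz(psi, q):                                # acts on adjacent qubits q, q+1
    shape = [1]*n
    shape[q] = shape[q+1] = 2
    return psi*zz_phase.reshape(shape)

mid = n//2
for step in range(steps):
    for q in range(n):
        psi = apply_1q(psi, rx, q)
    for q in range(n - 1):
        psi = apply_zz(psi, q)
    p = np.abs(psi)**2
    marg = p.sum(axis=tuple(i for i in range(n) if i != mid))
    print(f"step {step+1:2d}  <Z_mid> = {marg[0] - marg[1]: .4f}")
```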

We show how to reduce the computational time of the practical implementation of the Raviart-Thomas mixed method for second-order elliptic problems. The implementation takes advantage of a recent result which states that certain local subspaces of the vector unknown can be eliminated from the equations by transforming them into stabilization functions; see the paper published online in JJIAM on August 10, 2023. We describe the new implementation in detail (in MATLAB, on a laptop with an Intel(R) Core(TM) i7-8700 processor with six cores and hyperthreading) and present numerical results showing a 10 to 20% reduction in the computational time for the Raviart-Thomas method of index $k$, with $k$ ranging from 1 to 20, applied to a model problem.

We study the multivariate deconvolution problem of recovering the distribution of a signal from independent and identically distributed observations additively contaminated with random errors (noise) from a known distribution. For errors with independent coordinates having ordinary smooth densities, we derive an inversion inequality relating the $L^1$-Wasserstein distance between two distributions of the signal to the $L^1$-distance between the corresponding mixture densities of the observations. This smoothing inequality outperforms existing inversion inequalities. As an application of the inversion inequality to the Bayesian framework, we consider $1$-Wasserstein deconvolution with Laplace noise in dimension one using a Dirichlet process mixture of normal densities as a prior measure on the mixing distribution (or distribution of the signal). We construct an adaptive approximation of the sampling density by convolving the Laplace density with a well-chosen mixture of normal densities and show that the posterior measure concentrates around the sampling density at a nearly minimax rate, up to a log-factor, in the $L^1$-distance. The same posterior law is also shown to automatically adapt to the unknown Sobolev regularity of the mixing density, thus leading to a new Bayesian adaptive estimation procedure for mixing distributions with regular densities under the $L^1$-Wasserstein metric. We illustrate the utility of the inversion inequality also in a frequentist setting by showing that an appropriate isotone approximation of the classical kernel deconvolution estimator attains the minimax rate of convergence for $1$-Wasserstein deconvolution in any dimension $d\geq 1$ when only a tail condition is required on the latent mixing density, and we derive sharp lower bounds for these problems.
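
To make the frequentist side concrete, the sketch below implements the classical deconvolution kernel density estimator in dimension one with Laplace errors (characteristic function $1/(1+b^2t^2)$): the empirical characteristic function of the observations is divided by the error characteristic function and inverted against a kernel with compactly supported Fourier transform. The isotone correction and the Wasserstein-rate analysis from the abstract are not reproduced, and the bandwidth and kernel are ad hoc choices.

```python
# Classical deconvolution kernel density estimator with Laplace(0, b) errors.
import numpy as np

rng = np.random.default_rng(0)
n, b = 2000, 0.4
X = rng.normal(0.0, 1.0, n)                      # latent signal, here standard normal
Y = X + rng.laplace(0.0, b, n)                   # contaminated observations

h = 0.2                                          # bandwidth (ad hoc)
t = np.linspace(-1/h, 1/h, 1001)
phi_K = np.clip(1 - (t*h)**2, 0, None)**3        # compactly supported kernel transform
phi_Y = np.exp(1j*np.outer(t, Y)).mean(axis=1)   # empirical c.f. of the observations
phi_eps = 1.0/(1.0 + (b*t)**2)                   # Laplace(0, b) error c.f.
integrand = phi_K*phi_Y/phi_eps                  # division by phi_eps deconvolves

xs = np.linspace(-4, 4, 201)
dt = t[1] - t[0]
fhat = ((np.exp(-1j*np.outer(xs, t))*integrand).sum(axis=1)*dt).real/(2*np.pi)

truth = np.exp(-xs**2/2)/np.sqrt(2*np.pi)
print("max abs deviation from the true density:", np.abs(fhat - truth).max())
```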

We generalise a hybridized discontinuous Galerkin method for incompressible flow problems to non-affine cells, showing that with a suitable element mapping the generalised method preserves a key invariance property that eludes most methods, namely that any irrotational component of the prescribed force is exactly balanced by the pressure gradient and does not affect the velocity field. This invariance property can be preserved in the discrete problem if the incompressibility constraint is satisfied in a sufficiently strong sense. We derive sufficient conditions to guarantee discretely divergence-free functions are exactly divergence-free and give examples of divergence-free finite elements on meshes with triangular, quadrilateral, tetrahedral, or hexahedral cells generated by a (possibly non-affine) map from their respective reference cells. In the case of quadrilateral cells, we prove an optimal error estimate for the velocity field that does not depend on the pressure approximation. Our analysis is supported by numerical results.
