
Harnessing parity-time (PT) symmetry with balanced gain and loss profiles has created a variety of opportunities in electronics, from wireless energy transfer to telemetry sensing and topological defect engineering. However, existing implementations often employ ad-hoc approaches at low operating frequencies and are unable to accommodate large-scale integration. Here, we report a fully integrated realization of PT symmetry in a standard complementary metal-oxide-semiconductor technology. Our work demonstrates salient PT-symmetry features such as phase transition, as well as the ability to manipulate broadband microwave generation and propagation beyond the limitations encountered by existing schemes. The system shows 2.1 times the bandwidth and a 30 percent noise reduction compared to conventional microwave generation in oscillatory mode, and displays large non-reciprocal microwave transport from 2.75 to 3.10 gigahertz in non-oscillatory mode due to enhanced nonlinearities. This approach could enrich integrated circuit (IC) design methodology beyond well-established performance limits and enable the use of scalable IC technology to study topological effects in high-dimensional non-Hermitian systems.
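
The PT phase transition at the heart of these results can be illustrated with the textbook two-mode (dimer) coupled-resonator model: below a critical gain/loss rate the eigenfrequencies are real, and above it they split into a complex-conjugate pair at the exceptional point. The sketch below is a generic coupled-mode calculation in dimensionless units, not the reported CMOS circuit; the swept gain values are assumptions chosen purely for illustration.

```python
import numpy as np

# Coupled-mode Hamiltonian of a PT-symmetric dimer (dimensionless units:
# frequencies measured relative to resonance, in units of the coupling k):
#   H = [[ +1j*g, 1 ],
#        [  1,  -1j*g ]]
# Analytically the eigenvalues are +/- sqrt(1 - g**2):
#   real for g < 1 (PT-symmetric phase), purely imaginary for g > 1
#   (PT-broken phase); g = 1 is the exceptional point.

for g in [0.0, 0.5, 0.9, 1.0, 1.1, 1.5, 2.0]:
    H = np.array([[1j * g, 1.0],
                  [1.0, -1j * g]])
    eigvals = np.sort_complex(np.linalg.eigvals(H))
    phase = "PT-symmetric" if np.max(np.abs(eigvals.imag)) < 1e-6 else "PT-broken"
    print(f"g = {g:3.1f}  eigenvalues = {np.round(eigvals, 3)}  -> {phase}")
```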

Related content


Quickly and reliably finding accurate inverse kinematics (IK) solutions remains a challenging problem for robotic manipulation. Existing numerical solvers are broadly applicable, but rely on local search techniques to manage highly nonconvex objective functions. Recently, learning-based approaches have shown promise as a means to generate fast and accurate IK results; learned solvers can easily be integrated with other learning algorithms in end-to-end systems. However, learning-based methods have an Achilles' heel: each robot of interest requires a specialized model which must be trained from scratch. To address this key shortcoming, we investigate a novel distance-geometric robot representation coupled with a graph structure that allows us to leverage the flexibility of graph neural networks (GNNs). We use this approach to train the first learned generative graphical inverse kinematics (GGIK) solver that is, crucially, "robot-agnostic": a single model is able to provide IK solutions for a variety of different robots. Additionally, the generative nature of GGIK allows the solver to produce a large number of diverse solutions in parallel with minimal additional computation time, making it appropriate for applications such as sampling-based motion planning. Finally, GGIK can complement local IK solvers by providing reliable initializations. These advantages, as well as the ability to use task-relevant priors and to continuously improve with new data, suggest that GGIK has the potential to be a key component of flexible, learning-based robotic manipulation systems.
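
To make the distance-geometric representation concrete, consider a toy planar chain: joints and the end-effector become graph nodes, known link lengths become edge distances, and a goal pose adds one more known distance, so IK amounts to completing a partial distance matrix (equivalently, a point set). The sketch below only illustrates that representation with assumed link lengths; it is not the GGIK model or its GNN architecture.

```python
import numpy as np

# Toy distance-geometric view of a 3-link planar arm.
# Nodes: base p0, joints p1..p2, end-effector p3.
# Known edges: consecutive nodes are separated by the (assumed) link lengths;
# a goal pose adds a known base-to-end-effector distance.
link_lengths = [1.0, 0.8, 0.5]   # assumed geometry

def forward_points(joint_angles):
    """Forward kinematics returning all node positions (the 'point set')."""
    pts = [np.zeros(2)]
    heading = 0.0
    for L, q in zip(link_lengths, joint_angles):
        heading += q
        pts.append(pts[-1] + L * np.array([np.cos(heading), np.sin(heading)]))
    return np.stack(pts)

def partial_distance_graph(goal):
    """Edges (i, j, distance) known *before* solving IK."""
    edges = [(i, i + 1, L) for i, L in enumerate(link_lengths)]   # structure
    edges.append((0, len(link_lengths), np.linalg.norm(goal)))    # task constraint
    return edges

pts = forward_points([0.3, -0.5, 0.9])
print("node positions:\n", np.round(pts, 3))
print("known edges (i, j, d):",
      [(i, j, round(float(d), 3)) for i, j, d in partial_distance_graph(pts[-1])])
```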

In many engineering applications it is useful to reason about "negative information". For example, in planning problems, providing an optimal solution is the same as giving a feasible solution (the "positive" information) together with a proof of the fact that there cannot be feasible solutions better than the one given (the "negative" information). We model negative information by introducing the concept of "norphisms", as opposed to the positive information of morphisms. A "nategory" is a category that has "nom"-sets in addition to hom-sets, and specifies the interaction between norphisms and morphisms. In particular, we have composition rules of the form $\text{morphism} + \text{norphism} \to \text{norphism}$. Norphisms do not compose by themselves; rather, they use morphisms as catalysts. After providing several applied examples, we connect nategories to enriched category theory. Specifically, we prove that categories enriched in de Paiva's dialectica categories $\mathbf{GC}$, in the case $\mathbf{C} = \mathbf{Set}$ and equipped with a modified monoidal product, define nategories which satisfy additional regularity properties. This formalizes negative information categorically in a way that makes negative and positive morphisms equal citizens.
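
For orientation, the shape of these composition rules can be sketched as follows. This is one plausible reading of "morphism + norphism gives norphism", written with assumed notation rather than the paper's precise axioms: if a norphism $n \in \mathrm{Nom}(X,Z)$ records an obstruction to (suitable) morphisms $X \to Z$, then it can be transported along ordinary morphisms, because any candidate morphism on the remaining leg could otherwise be composed to contradict $n$.

```latex
% Sketch of the two "morphism + norphism -> norphism" rules (notation assumed):
\begin{align*}
  f \in \mathrm{Hom}(X,Y),\; n \in \mathrm{Nom}(X,Z)
      &\;\Longrightarrow\; \text{a norphism in } \mathrm{Nom}(Y,Z), \\
  g \in \mathrm{Hom}(Y,Z),\; n \in \mathrm{Nom}(X,Z)
      &\;\Longrightarrow\; \text{a norphism in } \mathrm{Nom}(X,Y).
\end{align*}
% There is no rule combining two norphisms: morphisms act as catalysts.
```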

Moving Object Detection (MOD) is a critical vision task for successfully achieving safe autonomous driving. Despite the plausible results of deep learning methods, most existing approaches are only frame-based and may fail to reach reasonable performance when dealing with dynamic traffic participants. Recent advances in sensor technologies, especially the Event camera, can naturally complement the conventional camera approach to better model moving objects. However, event-based works often adopt a pre-defined time window for event representation and simply integrate the events within it to estimate image intensities, neglecting much of the rich temporal information available in the asynchronous event stream. Therefore, from a new perspective, we propose RENet, a novel RGB-Event fusion network that jointly exploits the two complementary modalities to achieve more robust MOD under challenging scenarios for autonomous driving. Specifically, we first design a temporal multi-scale aggregation module to fully leverage event frames from both the RGB exposure time and larger intervals. Then we introduce a bi-directional fusion module to attentively calibrate and fuse multi-modal features. To evaluate the performance of our network, we carefully select and annotate a sub-MOD dataset from the commonly used DSEC dataset. Extensive experiments demonstrate that our proposed method performs significantly better than the state-of-the-art RGB-Event fusion alternatives.
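
The idea behind temporal multi-scale aggregation, binning the asynchronous event stream over several window lengths (the RGB exposure time plus larger intervals) rather than a single pre-defined one, can be sketched as follows. This is a generic illustration with an assumed event layout (one row per event: timestamp, x, y, polarity), not RENet's actual module.

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate signed event polarities into a single 2D frame."""
    frame = np.zeros((height, width), dtype=np.float32)
    for t, x, y, p in events:
        frame[int(y), int(x)] += 1.0 if p > 0 else -1.0
    return frame

def multi_scale_event_frames(events, t_ref, windows, height, width):
    """Build one event frame per temporal window ending at t_ref.

    `windows` might hold the RGB exposure time plus larger intervals,
    e.g. [5e-3, 20e-3, 50e-3] seconds (assumed values).
    """
    frames = []
    for w in windows:
        mask = (events[:, 0] >= t_ref - w) & (events[:, 0] <= t_ref)
        frames.append(events_to_frame(events[mask], height, width))
    return np.stack(frames)   # shape: (num_scales, H, W)

# Tiny synthetic example: 1000 random events over 50 ms on a 64x64 sensor.
rng = np.random.default_rng(0)
events = np.column_stack([
    rng.uniform(0.0, 0.05, 1000),    # timestamps [s]
    rng.integers(0, 64, 1000),       # x
    rng.integers(0, 64, 1000),       # y
    rng.choice([-1, 1], 1000),       # polarity
])
stack = multi_scale_event_frames(events, t_ref=0.05,
                                 windows=[5e-3, 20e-3, 50e-3],
                                 height=64, width=64)
print(stack.shape)   # (3, 64, 64)
```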

Full Waveform Inversion (FWI) is a successful and well-established inverse method for reconstructing material models from measured wave signals. In the field of seismic exploration, FWI has proven particularly successful in the reconstruction of smoothly varying material deviations. In contrast, non-destructive testing (NDT) often requires the detection and specification of sharp defects in a specimen. If the contrast between materials is low, FWI can be successfully applied to these problems as well. However, so far the method is not fully suited to imaging defects such as voids, which are characterized by a high contrast in the material parameters. In this paper, we introduce a dimensionless scaling function $\gamma$ to model voids in the forward and inverse scalar wave equation problem. Depending on which material parameters this function $\gamma$ scales, different modeling approaches are presented, leading to three formulations of mono-parameter FWI and one formulation of two-parameter FWI. The resulting problems are solved by first-order optimization, where the gradient is computed by an adjoint state method. The corresponding Fr\'echet kernels are derived for each approach and the associated minimization is performed using an L-BFGS algorithm. A comparison between the different approaches shows that scaling the density with $\gamma$ is most promising for parameterizing voids in the forward and inverse problem. Finally, in order to consider arbitrary complex geometries known a priori, this approach is combined with an immersed boundary method, the finite cell method (FCM).
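
To make the role of $\gamma$ concrete, one plausible way to write the density-scaled formulation singled out above is the following sketch (the notation is assumed, not taken from the paper): $\gamma \approx 1$ leaves the intact material unchanged, while $\gamma \to \gamma_{\min}$ effectively switches off the density inside a void.

```latex
% Scalar wave equation with a dimensionless void indicator gamma(x) scaling
% the density rho (sketch of the density-scaled formulation; notation assumed):
%   gamma ~ 1 in intact material, gamma ~ gamma_min > 0 inside a void.
\begin{equation*}
  \gamma(\mathbf{x})\,\rho(\mathbf{x})\,
  \frac{\partial^{2} u(\mathbf{x},t)}{\partial t^{2}}
  \;-\;
  \nabla \cdot \bigl( \kappa(\mathbf{x})\, \nabla u(\mathbf{x},t) \bigr)
  \;=\; f(\mathbf{x},t),
  \qquad 0 < \gamma_{\min} \le \gamma(\mathbf{x}) \le 1 .
\end{equation*}
```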

The calculation of a three-dimensional underwater acoustic field has always been a key problem in computational ocean acoustics. Traditionally, this solution is obtained by directly solving the acoustic Helmholtz equation using a finite difference or finite element algorithm. Solving the three-dimensional Helmholtz equation directly is computationally expensive. For quasi-three-dimensional problems, the Helmholtz equation can be processed by the integral transformation approach, which can greatly reduce the computational cost. In this paper, a numerical algorithm for a quasi-three-dimensional sound field that combines an integral transformation technique, stepwise coupled modes and a spectral method is designed. The quasi-three-dimensional problem is transformed into a two-dimensional problem using an integral transformation strategy. A stepwise approximation is then used to discretize the range dependence of the two-dimensional problem; this approximation is essentially a physical discretization that further reduces the range-dependent two-dimensional problem to a one-dimensional problem. Finally, the Chebyshev--Tau spectral method is employed to accurately solve the one-dimensional problem. We provide the corresponding numerical program SPEC3D for the proposed algorithm and describe some representative numerical examples. In the numerical experiments, the consistency between SPEC3D and the analytical solution/high-precision finite difference program COACH verifies the reliability and capability of the proposed algorithm. A comparison of running times illustrates that the algorithm proposed in this paper is significantly faster than the full three-dimensional algorithm in terms of computational speed.
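
The integral-transformation step amounts to removing one horizontal coordinate by a Fourier transform. Written for a quasi-three-dimensional problem whose medium is independent of the transverse coordinate $y$, the reduction reads roughly as follows (a standard sketch with assumed notation, not taken from SPEC3D).

```latex
% Fourier transform in the transverse coordinate y (wavenumber k_y) turns the
% 3D Helmholtz equation into a family of 2D problems in (x, z), one per k_y:
\begin{align*}
  &\frac{\partial^{2} p}{\partial x^{2}}
   + \frac{\partial^{2} p}{\partial y^{2}}
   + \frac{\partial^{2} p}{\partial z^{2}}
   + k^{2}(x,z)\, p
   = -\,\delta(x-x_s)\,\delta(y)\,\delta(z-z_s), \\[2pt]
  &\hat{p}(x, k_y, z)
   = \int_{-\infty}^{\infty} p(x, y, z)\, e^{-\mathrm{i} k_y y}\, \mathrm{d}y
  \;\Longrightarrow\;
  \frac{\partial^{2} \hat{p}}{\partial x^{2}}
  + \frac{\partial^{2} \hat{p}}{\partial z^{2}}
  + \bigl(k^{2}(x,z) - k_y^{2}\bigr)\, \hat{p}
  = -\,\delta(x-x_s)\,\delta(z-z_s).
\end{align*}
% Each 2D problem is solved by stepwise coupled modes with a Chebyshev--Tau
% discretization in depth; p is then recovered by the inverse transform over k_y.
```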

Much of the literature on optimal design of bandit algorithms is based on minimization of expected regret. It is well known that designs that are optimal over certain exponential families can achieve expected regret that grows logarithmically in the number of arm plays, at a rate governed by the Lai-Robbins lower bound. In this paper, we show that when one uses such optimized designs, the regret distribution of the associated algorithms necessarily has a very heavy tail, specifically, that of a truncated Cauchy distribution. Furthermore, for $p>1$, the $p$'th moment of the regret distribution grows much faster than poly-logarithmically, in particular as a power of the total number of arm plays. We show that optimized UCB bandit designs are also fragile in an additional sense, namely when the problem is even slightly mis-specified, the regret can grow much faster than the conventional theory suggests. Our arguments are based on standard change-of-measure ideas, and indicate that the most likely way that regret becomes larger than expected is when the optimal arm returns below-average rewards in the first few arm plays, thereby causing the algorithm to believe that the arm is sub-optimal. To alleviate the fragility issues exposed, we show that UCB algorithms can be modified so as to ensure a desired degree of robustness to mis-specification. In doing so, we also provide a sharp trade-off between the amount of UCB exploration and the tail exponent of the resulting regret distribution.
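
The mechanism described here, where the optimal arm draws unlucky early rewards and is then shelved, producing occasional very large regret, is easy to observe in a small Monte Carlo experiment. The sketch below runs a standard UCB1-style index policy on a two-armed Gaussian bandit and reports upper quantiles of the regret distribution; the horizon, gap, and exploration constant are assumptions for illustration, not the optimized designs analyzed in the paper (shrinking the exploration constant `c` makes the tail heavier, in the spirit of the trade-off mentioned above).

```python
import numpy as np

def ucb1_regret(horizon, means, sigma=1.0, c=2.0, rng=None):
    """Pseudo-regret of a UCB1-style policy on a Gaussian bandit."""
    if rng is None:
        rng = np.random.default_rng()
    k = len(means)
    counts = np.zeros(k)
    sums = np.zeros(k)
    for t in range(horizon):
        if t < k:                     # play each arm once
            arm = t
        else:
            index = sums / counts + np.sqrt(c * np.log(t + 1) / counts)
            arm = int(np.argmax(index))
        sums[arm] += rng.normal(means[arm], sigma)
        counts[arm] += 1
    return float(counts @ (max(means) - np.array(means)))

rng = np.random.default_rng(0)
means = [0.5, 0.0]                    # assumed gap of 0.5
regrets = np.array([ucb1_regret(2000, means, rng=rng) for _ in range(1000)])
for q in (0.5, 0.9, 0.99, 0.999):
    print(f"regret quantile {q:5.3f}: {np.quantile(regrets, q):8.1f}")
print("mean regret:", regrets.mean())
```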

Classical results in general equilibrium theory assume divisible goods and convex preferences of market participants. In many real-world markets, participants have non-convex preferences and the allocation problem needs to consider complex constraints. Electricity markets are a prime example. In such markets, Walrasian prices are impossible, and heuristic pricing rules based on the dual of the relaxed allocation problem are used in practice. However, these rules have been criticized for high side-payments and inadequate congestion signals. We show that existing pricing heuristics optimize specific design goals that can be conflicting. The trade-offs can be substantial, and we establish that the design of pricing rules is fundamentally a multi-objective optimization problem addressing different incentives. In addition to traditional multi-objective optimization techniques using weighting of individual objectives, we introduce a novel parameter-free pricing rule that minimizes incentives for market participants to deviate locally. Our findings show how the new pricing rule capitalizes on the upsides of existing pricing rules under scrutiny today. It leads to prices that incur low make-whole payments while providing adequate congestion signals and low lost opportunity costs. Our suggested pricing rule does not require weighting of objectives, it is computationally scalable, and it balances trade-offs in a principled manner, addressing an important policy issue in electricity markets.
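
A stylized two-generator, single-hour example shows why dual-based uniform prices need side-payments in the first place. The numbers below are hypothetical and the rule applied is plain marginal-cost pricing from the relaxed problem, so this is only an illustration of make-whole payments, not of the specific pricing rules compared in the paper.

```python
# Toy single-hour market with non-convex (start-up) costs; hypothetical numbers.
# Gen A: 80 MW capacity, marginal cost 20 $/MWh, no start-up cost.
# Gen B: 50 MW capacity, marginal cost 40 $/MWh, 1500 $ start-up cost.
demand = 100.0
gens = {
    "A": {"cap": 80.0, "mc": 20.0, "startup": 0.0},
    "B": {"cap": 50.0, "mc": 40.0, "startup": 1500.0},
}

# Least-cost dispatch (merit order): A first, then B covers the remainder.
dispatch = {"A": min(gens["A"]["cap"], demand)}
dispatch["B"] = demand - dispatch["A"]

# Uniform price from the relaxed (convex) problem: marginal cost of the
# marginal unit, here generator B.
price = gens["B"]["mc"]

# Make-whole (uplift) payments: top up any committed generator whose market
# revenue does not cover its as-bid cost, including the start-up cost.
for name, g in gens.items():
    q = dispatch[name]
    revenue = price * q
    cost = g["mc"] * q + (g["startup"] if q > 0 else 0.0)
    make_whole = max(0.0, cost - revenue)
    print(f"Gen {name}: dispatch {q:5.1f} MW, revenue {revenue:7.1f} $, "
          f"cost {cost:7.1f} $, make-whole {make_whole:6.1f} $")
```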

The weakly compressible smoothed particle hydrodynamics (WCSPH) method has been employed to simulate various physical phenomena involving fluids and solids. Various methods have been proposed to implement the solid wall, inlet/outlet, and other boundary conditions. However, error estimation and the formal rates of convergence for these methods have not been discussed or examined carefully. In this paper, we use the method of manufactured solutions (MMS) to verify the convergence properties of a variety of commonly employed solid, inlet, and outlet boundary condition implementations. In order to perform this study, we propose various manufactured solutions for different domains. On the basis of the convergence offered by these methods, we systematically propose a convergent WCSPH scheme along with suitable methods for implementing the boundary conditions. Along with other recent developments in the use of adaptive resolution, this paves the way for accurate and efficient simulation of incompressible or weakly-compressible fluid flows using the SPH method.
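
The method of manufactured solutions itself is easy to sketch: pick a smooth analytical field, substitute it into the governing equations, and move the residual to the right-hand side as a source term, so the chosen field becomes the exact solution against which the discretization error, and hence the convergence rate, is measured. The snippet below derives such a source for a 1D unsteady diffusion equation with sympy; it is a generic MMS illustration, not the WCSPH equations or the manufactured solutions proposed in the paper.

```python
import sympy as sp

x, t, nu = sp.symbols("x t nu", real=True, positive=True)

# 1. Manufactured solution: any smooth field that exercises the operator.
u_m = sp.sin(sp.pi * x) * sp.exp(-t)

# 2. Governing operator, here 1D unsteady diffusion: u_t - nu * u_xx.
residual = sp.diff(u_m, t) - nu * sp.diff(u_m, x, 2)

# 3. The residual becomes the source term; solving
#    u_t - nu * u_xx = source  then has u_m as its exact solution,
#    so errors and convergence rates can be measured directly.
source = sp.simplify(residual)
print("manufactured source term:", source)
```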

Neural architecture-based recommender systems have achieved tremendous success in recent years. However, when dealing with highly sparse data, they still fall short of expectation. Self-supervised learning (SSL), as an emerging technique to learn with unlabeled data, has recently drawn considerable attention in many fields. There is also a growing body of research applying SSL to recommendation to mitigate the data sparsity issue. In this survey, a timely and systematic review of the research efforts on self-supervised recommendation (SSR) is presented. Specifically, we propose an exclusive definition of SSR, on top of which we build a comprehensive taxonomy to divide existing SSR methods into four categories: contrastive, generative, predictive, and hybrid. For each category, the narrative unfolds along its concept and formulation, the involved methods, and its pros and cons. Meanwhile, to facilitate the development and evaluation of SSR models, we release an open-source library, SELFRec, which incorporates multiple benchmark datasets and evaluation metrics, and implements a number of state-of-the-art SSR models for empirical comparison. Finally, we shed light on the limitations of the current research and outline future research directions.
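
As a concrete instance of the contrastive category, the snippet below shows the widely used InfoNCE objective applied to two augmented views of the same batch of user/item embeddings. It is a generic sketch of contrastive SSR, not the API of SELFRec or of any specific model covered in the survey.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.2):
    """InfoNCE loss between two augmented views of the same batch of embeddings.

    z1, z2: (batch, dim) embeddings of the same users/items under two
    stochastic augmentations (e.g. edge dropout on the interaction graph).
    Matching rows are positives; all other rows in the batch act as negatives.
    """
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature             # (batch, batch) similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

# Tiny usage example with random embeddings standing in for two encoder passes.
z1, z2 = torch.randn(64, 32), torch.randn(64, 32)
print(float(info_nce(z1, z2)))
```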

The growing energy and performance costs of deep learning have driven the community to reduce the size of neural networks by selectively pruning components. Similarly to their biological counterparts, sparse networks generalize just as well as, if not better than, the original dense networks. Sparsity can reduce the memory footprint of regular networks to fit mobile devices, as well as shorten training time for ever-growing networks. In this paper, we survey prior work on sparsity in deep learning and provide an extensive tutorial of sparsification for both inference and training. We describe approaches to remove and add elements of neural networks, different training strategies to achieve model sparsity, and mechanisms to exploit sparsity in practice. Our work distills ideas from more than 300 research papers and provides guidance to practitioners who wish to utilize sparsity today, as well as to researchers whose goal is to push the frontier forward. We include the necessary background on mathematical methods in sparsification, describe phenomena such as early structure adaptation, the intricate relations between sparsity and the training process, and show techniques for achieving acceleration on real hardware. We also define a metric of pruned parameter efficiency that could serve as a baseline for comparison of different sparse networks. We close by speculating on how sparsity can improve future workloads and outline major open problems in the field.
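
One of the simplest sparsification mechanisms covered by such surveys is global magnitude pruning: rank all weights by absolute value, zero out the smallest fraction, and keep a binary mask so the zeros survive later updates. The sketch below is a minimal PyTorch illustration of that idea, not the recommended recipe of the paper.

```python
import torch
import torch.nn as nn

def global_magnitude_prune(model, sparsity=0.9):
    """Zero out the smallest-magnitude weights across all Linear/Conv2d layers.

    Returns a dict of binary masks that can be re-applied after each optimizer
    step to keep pruned weights at zero during fine-tuning.
    """
    prunable = {name: m.weight for name, m in model.named_modules()
                if isinstance(m, (nn.Linear, nn.Conv2d))}
    all_scores = torch.cat([w.detach().abs().flatten() for w in prunable.values()])
    threshold = torch.quantile(all_scores, sparsity)

    masks = {}
    with torch.no_grad():
        for name, w in prunable.items():
            mask = (w.abs() > threshold).float()
            w.mul_(mask)                     # apply the mask in place
            masks[name] = mask
    return masks

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
masks = global_magnitude_prune(model, sparsity=0.9)
total = sum(m.numel() for m in masks.values())
nonzero = sum(int(m.sum()) for m in masks.values())
print(f"kept {nonzero}/{total} weights ({nonzero / total:.1%})")
```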
