
Tensors are often studied by introducing preorders such as restriction and degeneration: the former describes transformations of a tensor by local linear maps on its tensor factors; the latter describes transformations where the local linear maps may vary along a curve, and the resulting tensor is expressed as a limit along this curve. In this work we introduce and study partial degeneration, a special version of degeneration where one of the local linear maps is constant whereas the others vary along a curve. Motivated by algebraic complexity, quantum entanglement and tensor networks, we present constructions based on matrix multiplication tensors and find examples by making a connection to the theory of prehomogeneous tensor spaces. We highlight the subtleties of this new notion by showing obstruction and classification results for the unit tensor. To this end, we study the notion of aided rank, a natural generalization of tensor rank. The existence of partial degenerations gives strong upper bounds on the aided rank of the tensor, which in turn allows one to turn degenerations into restrictions. In particular, we present several examples, based on the W-tensor and the Coppersmith-Winograd tensors, where lower bounds on aided rank provide obstructions to the existence of certain partial degenerations.
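For readers meeting these preorders for the first time, here is a minimal sketch of the three notions in LaTeX, following the informal definitions in the abstract; the symbols and the choice of three tensor factors are ours, not necessarily the paper's.

```latex
% For T in V_1 \otimes V_2 \otimes V_3 (three factors chosen for concreteness):
S \le T \;\iff\; S = (A \otimes B \otimes C)\,T
  % restriction: fixed local linear maps
S \trianglelefteq T \;\iff\; S = \lim_{\varepsilon \to 0}
  \bigl(A(\varepsilon) \otimes B(\varepsilon) \otimes C(\varepsilon)\bigr)\,T
  % degeneration: local maps vary along a curve
S = \lim_{\varepsilon \to 0} \bigl(A \otimes B(\varepsilon) \otimes C(\varepsilon)\bigr)\,T
  % partial degeneration: one local map (here A) held constant
```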

Related content

We study the problem of planning restless multi-armed bandits (RMABs) with multiple actions. This is a popular model for multi-agent systems, with applications such as multi-channel communication, monitoring and machine maintenance tasks, and healthcare. Whittle index policies, which are based on Lagrangian relaxations, are widely used in these settings due to their simplicity and near-optimality under certain conditions. In this work, we first show that Whittle index policies can fail in simple and practically relevant RMAB settings, even when the RMABs are indexable. We discuss why the optimality guarantees fail and why asymptotic optimality may not translate well to practically relevant planning horizons. We then propose an alternative planning algorithm based on the mean-field method, which provably and efficiently obtains near-optimal policies for a large number of arms, without the stringent structural assumptions required by Whittle index policies. Our approach borrows ideas from existing research and improves on it in several ways: it requires no exogenous hyper-parameters, and our non-asymptotic analysis provides (a) tighter polynomial dependence on known problem parameters; (b) high-probability bounds showing that the reward of the policy is reliable; and (c) matching sub-optimality lower bounds for this algorithm with respect to the number of arms, demonstrating the tightness of our bounds. Our extensive experimental analysis shows that the mean-field approach matches or outperforms other baselines.
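As context for the policies being critiqued, here is a minimal sketch of how a Whittle index policy acts once per-arm indices are available; computing the indices themselves is the hard part and is omitted, and all names here are illustrative rather than from the paper.

```python
import numpy as np

def whittle_policy(indices: np.ndarray, budget: int) -> np.ndarray:
    """Activate the `budget` arms with the largest Whittle indices.

    indices : shape (n_arms,), the Whittle index of each arm's current
              state (assumed precomputed). Returns a 0/1 action vector.
    This is the index policy whose near-optimality the abstract shows
    can fail even for indexable RMABs.
    """
    actions = np.zeros_like(indices, dtype=int)
    actions[np.argsort(indices)[-budget:]] = 1  # top-`budget` arms
    return actions

# Example: 5 arms, budget 2
print(whittle_policy(np.array([0.3, 1.2, -0.1, 0.8, 0.5]), budget=2))
```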

The unit selection problem aims to identify objects, called units, that are most likely to exhibit a desired mode of behavior when subjected to stimuli (e.g., customers who are about to churn but would change their mind if encouraged). Unit selection with counterfactual objective functions was introduced relatively recently, with existing work focusing on bounding a specific class of objective functions, called benefit functions, based on observational and interventional data -- assuming a fully specified model is not available to evaluate these functions. We complement this line of work by proposing the first exact algorithm for finding optimal units given a broad class of causal objective functions and a fully specified structural causal model (SCM). We show that unit selection under this class of objective functions is $\text{NP}^\text{PP}$-complete but becomes $\text{NP}$-complete when unit variables correspond to all exogenous variables in the SCM. We also provide treewidth-based complexity bounds on our proposed algorithm while relating it to a well-known algorithm for Maximum a Posteriori (MAP) inference.
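To make "benefit functions" concrete, here is a representative member of this class as commonly written in the counterfactual literature; the weights and the exact form are illustrative and should be checked against the paper's definition.

```latex
% y_x reads "Y = y had X been set to x"; u ranges over units;
% \beta, \gamma, \theta, \delta are application-chosen weights over the
% four response types (complier, always-taker, never-taker, defier):
f(u) = \beta\,P(y_x, y'_{x'} \mid u) + \gamma\,P(y_x, y_{x'} \mid u)
     + \theta\,P(y'_x, y'_{x'} \mid u) + \delta\,P(y'_x, y_{x'} \mid u)
```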

In this paper, we propose new algorithms for evacuation problems defined on dynamic flow networks. A dynamic flow network is a directed graph in which source nodes are given supplies (i.e., the number of evacuees) and a single sink node is given a demand (i.e., the maximum number of acceptable evacuees). The evacuation problem seeks a dynamic flow that sends all supplies from the sources to the sink such that the demand is satisfied within the minimum feasible time horizon. For this problem, the current best algorithms are those of Schl\"oter (2018) and Kamiyama (2019), which run in strongly polynomial time but with high-order polynomial time complexity because they use submodular function minimization as a subroutine. In this paper, we propose new algorithms that do not explicitly execute submodular function minimization, and we prove that they are faster than those of Schl\"oter (2018) and Kamiyama (2019) when the input network is restricted so that the sink has a small in-degree and every edge has the same capacity.
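For orientation, the textbook baseline for this problem is a feasibility check on a time-expanded network combined with a search over the horizon T; this is not the paper's algorithm (which avoids such constructions), and the code below is a sketch assuming unit transit times and a networkx-style graph with a 'cap' edge attribute.

```python
import networkx as nx

def feasible(G, supplies, sink, T):
    """Can all supplies reach `sink` within horizon T? Builds the
    time-expanded network (unit transit times assumed) and runs max-flow.
    Textbook baseline only -- NOT the paper's faster algorithm."""
    H = nx.DiGraph()
    for t in range(T):
        for v in G.nodes:                       # holdover arcs: wait in place
            H.add_edge((v, t), (v, t + 1), capacity=float("inf"))
        for u, v, d in G.edges(data=True):      # movement arcs
            H.add_edge((u, t), (v, t + 1), capacity=d["cap"])
    for v, s in supplies.items():               # super-source feeds supplies
        H.add_edge("src", (v, 0), capacity=s)
    for t in range(T + 1):                      # super-sink collects arrivals
        H.add_edge((sink, t), "dst", capacity=float("inf"))
    return nx.maximum_flow_value(H, "src", "dst") >= sum(supplies.values())

# Minimum feasible horizon via linear (or binary) search:
# T* = min{ T : feasible(G, supplies, sink, T) }
```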

In this work, we focus on the Neumann-Neumann method (NNM), one of the most popular non-overlapping domain decomposition methods. Even though the NNM is widely used and proves very efficient when applied to discrete problems in practical applications, it is in general not well defined at the continuous level when the geometric decomposition involves cross-points. Our goals are to investigate this well-posedness issue and to provide a complete analysis of the method at the continuous level when it is applied to a simple elliptic problem on a configuration involving one cross-point. More specifically, we prove that the algorithm generates solutions that are singular near the cross-points. We also exhibit the type of singularity introduced by the method and show how it propagates through the iterations. Then, based on this analysis, we design a new set of transmission conditions that makes the new NNM geometrically convergent for this simple configuration. Finally, we illustrate our results with numerical experiments.
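For readers unfamiliar with the method, one iteration of the classical two-subdomain NNM for $-\Delta u = f$ reads as follows; this is the standard textbook form in our notation, whereas the cross-point configurations studied in the paper involve more than two subdomains.

```latex
% One NNM iteration on \Omega = \Omega_1 \cup \Omega_2 with interface \Gamma
% and relaxation parameter \theta:
-\Delta u_i^{\,n} = f \ \text{in } \Omega_i, \qquad
    u_i^{\,n} = \lambda^n \ \text{on } \Gamma
    \quad \text{(local Dirichlet solves)}
-\Delta \psi_i^{\,n} = 0 \ \text{in } \Omega_i, \qquad
    \partial_{n_i} \psi_i^{\,n} = \partial_{n_1} u_1^{\,n} + \partial_{n_2} u_2^{\,n}
    \ \text{on } \Gamma
    \quad \text{(local Neumann solves on the flux jump)}
\lambda^{n+1} = \lambda^n - \theta \bigl(\psi_1^{\,n} + \psi_2^{\,n}\bigr)\big|_\Gamma
    \quad \text{(interface update)}
```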

We adapt the construction of algebraic circuits over the reals introduced by Cucker and Meer to arbitrary infinite integral domains and generalize the $\mathrm{AC}_{\mathbb{R}}$ and $\mathrm{NC}_{\mathbb{R}}$ classes to this setting. We give a theorem in the style of Immerman's theorem, showing that for these adapted formalisms, the sets decided by circuits of constant depth and polynomial size are exactly the sets definable in a suitable adaptation of first-order logic. Additionally, we discuss a generalization of the guarded predicative logic of Durand, Haak and Vollmer, and we show characterizations of the $\mathrm{AC}_{R}$ and $\mathrm{NC}_{R}$ hierarchies. These generalizations apply to the Boolean $\mathrm{AC}$ and $\mathrm{NC}$ hierarchies as well. Furthermore, we introduce a formalism for comparing some of the aforementioned complexity classes over different underlying integral domains.
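Schematically, the Immerman-style correspondence stated above can be written as follows, where $R$ is an infinite integral domain; the superscript 0 marking constant depth is our notation and may differ from the paper's.

```latex
% constant-depth, polynomial-size circuits over R
%   = first-order definability over R:
\mathrm{AC}^{0}_{R} \;=\; \mathrm{FO}_{R}
```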

We design an adaptive virtual element method (AVEM) of lowest order over triangular meshes with hanging nodes in 2d, which are treated as polygons. AVEM hinges on the stabilization-free a posteriori error estimators recently derived in [8]. The crucial property, that also plays a central role in this paper, is that the stabilization term can be made arbitrarily small relative to the a posteriori error estimators upon increasing the stabilization parameter. Our AVEM concatenates two modules, GALERKIN and DATA. The former deals with piecewise constant data and is shown in [8] to be a contraction between consecutive iterates. The latter approximates general data by piecewise constants to a desired accuracy. AVEM is shown to be convergent and quasi-optimal, in terms of error decay versus degrees of freedom, for solutions and data belonging to appropriate approximation classes. Numerical experiments illustrate the interplay between these two modules and provide computational evidence of optimality.
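A schematic of how the two modules might be concatenated, based only on the description above; every helper name and the exact tolerance-splitting strategy are hypothetical, not taken from [8] or this paper.

```python
from typing import Callable, Tuple

def avem(f, mesh, tol: float,
         approximate_data: Callable,   # DATA module (hypothetical signature)
         galerkin_afem: Callable       # GALERKIN module (hypothetical signature)
         ) -> Tuple[object, object]:
    """Schematic AVEM driver: alternate DATA and GALERKIN with tightening
    tolerances until the a posteriori estimator meets `tol`."""
    eps = tol
    while True:
        # DATA: replace general data f by a piecewise-constant approximation
        f_hat = approximate_data(f, mesh, eps / 2)
        # GALERKIN: contractive adaptive loop for piecewise-constant data,
        # refining `mesh` (hanging nodes treated as polygon vertices)
        u, mesh, estimator = galerkin_afem(f_hat, mesh, eps / 2)
        if estimator <= tol:
            return u, mesh
        eps /= 2  # tighten both modules and iterate again
```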

Kinetic equations model the position-velocity distribution of particles subject to transport and collision effects. Under a diffusive scaling, these combined effects converge to a diffusion equation for the position density in the limit of an infinite collision rate. Despite this well-defined limit, numerical simulation is expensive when the collision rate is high but finite, as small time steps are then required. In this work, we present an asymptotic-preserving multilevel Monte Carlo particle scheme that uses this diffusive limit to accelerate computations. In this scheme, we first sample the diffusive limiting model to compute a biased initial estimate of a quantity of interest, using large time steps. We then perform a limited number of finer simulations with transport and collision dynamics to correct the bias. The efficiency of the multilevel method depends on being able to perform correlated simulations of particles on a hierarchy of discretization levels. We introduce a method for correlating particle trajectories and support it with both an analysis and numerical experiments. We demonstrate that our approach significantly reduces the cost of particle simulations in high-collisional regimes compared with prior work, indicating significant potential for adopting these schemes in various areas of active research.
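The bias-correction structure described above is the standard multilevel Monte Carlo telescoping identity; in the notation below (ours), level 0 is the cheap diffusive-limit model and levels $1,\dots,L$ are increasingly fine transport-collision simulations.

```latex
% Each expectation on the right is estimated with its own particle
% ensemble; within a correction term, the two levels must be correlated:
\mathbb{E}[Q_L] \;=\; \mathbb{E}[Q_0]
    \;+\; \sum_{\ell=1}^{L} \mathbb{E}\!\left[Q_\ell - Q_{\ell-1}\right]
```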

When implementing the gradient descent method in low precision, employing stochastic rounding schemes helps to prevent stagnation of convergence caused by the vanishing-gradient effect. Unbiased stochastic rounding achieves zero bias by preserving small updates with probabilities proportional to their relative magnitudes. This study provides a theoretical explanation for the stagnation of the gradient descent method in low-precision computation. Additionally, we propose two new stochastic rounding schemes that trade the zero-bias property for a larger probability of preserving small gradients. Our methods yield a constant rounding bias that, on average, lies in a descent direction. For convex problems, we prove that the proposed rounding methods typically have a beneficial effect on the convergence rate of gradient descent. We validate our theoretical analysis by comparing the performance of various rounding schemes when optimizing a multinomial logistic regression model and when training a simple neural network with an 8-bit floating-point format.
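A minimal sketch of unbiased stochastic rounding on a uniform grid, illustrating why small updates survive in expectation; real 8-bit floating-point grids are non-uniform, and the paper's two biased schemes modify the probability below rather than use it as-is.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_round(x, ulp):
    """Unbiased stochastic rounding to a grid of spacing `ulp`:
    round up with probability equal to the fractional position, so that
    E[stochastic_round(x)] == x and small updates survive in expectation."""
    scaled = x / ulp
    low = np.floor(scaled)
    p_up = scaled - low                    # distance to the lower grid point
    up = rng.random(np.shape(x)) < p_up    # round up with probability p_up
    return (low + up) * ulp

# A tiny update that round-to-nearest would always kill:
w, g = 1.0, 0.004
print(stochastic_round(w - g, ulp=0.01))   # 0.99 with prob 0.4, else 1.0
```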

Systems consisting of spheres rolling on elastic membranes have been used as educational tools to introduce a core conceptual idea of General Relativity (GR): how curvature guides the movement of matter. However, previous studies have revealed that such schemes cannot accurately represent relativistic dynamics in the laboratory: dissipative forces cause the initially GR-like dynamics to be transient and consequently restrict experimental study to only the beginnings of trajectories, while the dominance of Earth's gravity obscures the distinction between spatial and temporal spacetime curvatures. Here, by developing a mapping for the dynamics of a wheeled vehicle on a spandex membrane, we demonstrate that an active object that can prescribe its speed can not only attain steady-state orbits but can also use additional parameters such as speed to tune the orbits towards relativistic dynamics. Our mapping demonstrates how activity mixes space and time in a metric, and shows that active particles do not necessarily follow geodesics in real space but instead follow geodesics in a fiducial spacetime. The mapping further reveals how parameters such as the membrane elasticity and the instantaneous speed allow programming a desired spacetime, such as the Schwarzschild metric near a non-rotating black hole. Our mapping and framework point the way towards creating a robophysical analog gravity system in the laboratory at low cost, and provide insights into active matter in deformable environments and robot exploration in complex landscapes.
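For reference, the Schwarzschild metric mentioned above, in standard Schwarzschild coordinates; this is a textbook fact, not something specific to this paper's construction.

```latex
% r_s = 2GM/c^2 is the Schwarzschild radius;
% d\Omega^2 = d\theta^2 + \sin^2\theta \, d\varphi^2:
ds^2 = -\left(1 - \frac{r_s}{r}\right) c^2\, dt^2
       + \left(1 - \frac{r_s}{r}\right)^{-1} dr^2 + r^2\, d\Omega^2
```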

Unsupervised domain adaptation has recently emerged as an effective paradigm for generalizing deep neural networks to new target domains. However, a substantial gap to fully supervised performance remains. In this paper, we present a novel active learning strategy to assist knowledge transfer in the target domain, dubbed active domain adaptation. We start from the observation that energy-based models exhibit free-energy biases when training (source) and test (target) data come from different distributions. Inspired by this inherent mechanism, we empirically show that a simple yet efficient energy-based sampling strategy selects more valuable target samples than existing approaches, which require particular architectures or the computation of distances. Our algorithm, Energy-based Active Domain Adaptation (EADA), queries groups of target data that incorporate both domain characteristics and instance uncertainty into every selection round. Meanwhile, by compactly aligning the free energy of target data around the source domain via a regularization term, the domain gap can be implicitly diminished. Through extensive experiments, we show that EADA surpasses state-of-the-art methods on well-known challenging benchmarks with substantial improvements, making it a useful option in the open world. Code is available at //github.com/BIT-DA/EADA.
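A minimal sketch of energy-based sample selection under the free-energy view described above; EADA itself combines a domain term and an instance-uncertainty term per round, so treat this as an illustration rather than the paper's exact criterion (see the repository above for that).

```python
import torch

def free_energy(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    """Free energy in the energy-based view of a classifier:
    F(x) = -T * logsumexp(logits / T); source-like inputs tend to
    have lower free energy than out-of-distribution target inputs."""
    return -T * torch.logsumexp(logits / T, dim=1)

def select_queries(target_logits: torch.Tensor, budget: int) -> torch.Tensor:
    """Query the `budget` target samples with the highest free energy,
    i.e. those the model currently finds least source-like."""
    return torch.topk(free_energy(target_logits), k=budget).indices

# Example: 100 target samples, 10-way classifier, query 5 for labeling
idx = select_queries(torch.randn(100, 10), budget=5)
```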
