
We analyze the computational complexity of basic reconfiguration problems for the recently introduced surface Chemical Reaction Networks (sCRNs), where ordered pairs of adjacent species nondeterministically transform into a different ordered pair of species according to a predefined set of allowed transition rules (chemical reactions). In particular, two questions that are fundamental to the simulation of sCRNs are whether a given configuration of molecules can ever transform into another given configuration, and whether a given cell can ever contain a given species, given a set of transition rules. We show that these problems can be solved in polynomial time, are NP-complete, or are PSPACE-complete in a variety of different settings, including when adjacent species merely swap instead of transforming arbitrarily (swap sCRNs), and when cells can change species a limited number of times (k-burnout). Most problems turn out to be at least NP-hard except with very few distinct species (2 or 3).
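To make the first question concrete, here is a minimal brute-force sketch of configuration-to-configuration reachability on a one-dimensional strip of cells. The encoding (tuples of species names, a `rules` dict mapping an ordered pair to the pairs it may become) is ours, and exhaustive BFS is only workable for tiny instances, consistent with the hardness results above.

```python
from collections import deque

def reachable(start, target, rules):
    """BFS over configurations of a 1-D strip of cells.
    `rules` maps an ordered pair of adjacent species to the ordered
    pairs it may become; a swap sCRN rule (a, b) -> (b, a) just
    exchanges the two species."""
    seen = {start}
    frontier = deque([start])
    while frontier:
        config = frontier.popleft()
        if config == target:
            return True
        for i in range(len(config) - 1):
            for new_pair in rules.get((config[i], config[i + 1]), ()):
                nxt = config[:i] + new_pair + config[i + 2:]
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return False

# One swap rule: adjacent 'a' and 'b' may exchange places.
rules = {('a', 'b'): [('b', 'a')]}
print(reachable(('a', 'b', 'b'), ('b', 'b', 'a'), rules))  # True
```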

Related Content


Optimizing static risk-averse objectives in Markov decision processes is difficult because they do not admit the standard dynamic programming equations common in Reinforcement Learning (RL) algorithms. Dynamic programming decompositions that augment the state space with discrete risk levels have recently gained popularity in the RL community. Prior work has shown that these decompositions are optimal when the risk level is discretized sufficiently finely. However, we show that these popular decompositions for Conditional-Value-at-Risk (CVaR) and Entropic-Value-at-Risk (EVaR) are inherently suboptimal regardless of the discretization level. In particular, we show that a saddle point property assumed to hold in prior literature may be violated. In contrast, a decomposition does hold for Value-at-Risk, and our proof demonstrates how this risk measure differs from CVaR and EVaR. Our findings are significant because risk-averse algorithms are used in high-stakes environments, making their correctness critical.
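As a quick reference for the risk measures involved (this is textbook background, not the paper's decomposition), the sketch below computes VaR and CVaR at level alpha for an equally weighted sample of losses; note that conventions for discrete distributions vary slightly.

```python
import numpy as np

def var_cvar(samples, alpha):
    """Value-at-Risk and Conditional-Value-at-Risk at level alpha for
    losses given as equally weighted samples (higher = worse)."""
    losses = np.sort(np.asarray(samples))
    k = int(np.ceil(alpha * len(losses))) - 1  # index of the alpha-quantile
    var = losses[k]                            # VaR: the alpha-quantile itself
    cvar = losses[k:].mean()                   # CVaR: mean of the worst tail
    return var, cvar

losses = [1.0, 2.0, 3.0, 10.0]
print(var_cvar(losses, alpha=0.75))  # VaR = 3.0, CVaR = mean of {3, 10} = 6.5
```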

We present a computational modelling approach aimed at capturing how to virtually augment a Metaverse user's available social time capacity by using an independent and autonomous version of her digital representation in the Metaverse. We envision a Metaverse-focused extension of the traditional avatar concept: an avatar can also be programmed to operate independently when its user is not controlling it directly, thus turning it into an agent-based digital human representation. This way, the user can virtually delegate to the avatar the socializing time required for maintaining existing contacts, so as to preserve spare, non-avatar-mediated socializing time that can be invested in additional socialization activities. We model the setting and identify the characteristic variables using selected concepts from the social sciences: ego networks, social presence, and social cues. We then formulate the problem of maximizing the user's non-avatar-mediated spare time as a linear optimization problem. Finally, we analyze the feasible region of the problem and present some initial insights on the spare time that can be achieved for different parameter values of the avatar-mediated interactions.
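Purely to illustrate the shape of such a linear optimization (every variable, coefficient, and bound below is a hypothetical placeholder, not the paper's model), here is a toy instance solved with `scipy.optimize.linprog`: the avatar has a limited time budget, and delegating contact maintenance to it frees up the user's own socializing time.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy instance: x[i] = hours/week the avatar spends
# maintaining contact i on the user's behalf.
s = np.array([2.0, 1.5, 3.0])      # user hours freed per delegated hour
cost = np.array([1.0, 1.0, 2.0])   # avatar hours consumed per delegated hour
budget = 4.0                       # total avatar hours available
upper = np.array([2.0, 2.0, 1.0])  # max delegable hours per contact

# linprog minimizes, so negate the objective to maximize freed time s @ x.
res = linprog(c=-s, A_ub=[cost], b_ub=[budget],
              bounds=list(zip(np.zeros(3), upper)))
print(res.x, -res.fun)  # optimal delegation and spare time gained
```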

Morphing quadrotors with four external actuators can adapt to different restricted scenarios by changing their geometric structure. However, previous works mainly focus on improvements to structures and controllers, and existing planning algorithms do not consider the morphological modifications, which leads to safety and dynamic-feasibility issues. In this paper, we propose a unified planning and control framework that lets morphing quadrotors deform autonomously and efficiently. The framework consists of a milliseconds-level spatial-temporal trajectory optimizer that takes the morphological modifications of quadrotors into account and generates full-body safe trajectories including position and attitude. Additionally, it incorporates a nonlinear attitude controller that accounts for aerodynamic drag and adjusts dynamic parameters such as the inertia tensor and center of gravity on the fly; the controller can also compute the thrust coefficient online during morphing. Benchmark comparisons against existing methods validate the robustness of the proposed controller. Extensive simulations and real-world experiments demonstrate the effectiveness of the proposed framework.
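One standard way such on-the-fly parameter updates can be computed (a sketch under a point-mass-arm assumption with made-up masses and lengths, not the paper's model) is to recompute the center of gravity and apply the parallel-axis theorem as the arm angles change:

```python
import numpy as np

def body_params(arm_angles, arm_mass=0.1, arm_len=0.15, body_mass=0.8,
                body_inertia=np.diag([2e-3, 2e-3, 4e-3])):
    """Recompute CoG and inertia tensor of a morphing quadrotor whose
    four arms rotate in the body x-y plane; arms are approximated as
    point masses at the motors.  All numbers are hypothetical."""
    total_mass = body_mass + 4 * arm_mass
    motors = np.array([[arm_len * np.cos(a), arm_len * np.sin(a), 0.0]
                       for a in arm_angles])
    cog = arm_mass * motors.sum(axis=0) / total_mass  # hub = frame origin
    inertia = body_inertia.copy()
    for r in motors - cog:                 # motor offsets from the new CoG
        # parallel-axis theorem for a point mass at offset r
        inertia += arm_mass * ((r @ r) * np.eye(3) - np.outer(r, r))
    rb = -cog                              # central body offset from the CoG
    inertia += body_mass * ((rb @ rb) * np.eye(3) - np.outer(rb, rb))
    return cog, inertia

# An "X" configuration; vary the angles to simulate morphing.
cog, J = body_params(np.deg2rad([45, 135, 225, 315]))
print(cog, np.diag(J))
```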

The purpose of this work is to present an effective tool for computing different QR-decompositions of a complex nonsingular square matrix. The concept of the discrete signal-induced heap transform (DsiHT, Grigoryan 2006) is used. This transform is fast, has a unique algorithm for any length of the input vector/signal, and can be used with different complex basic 2x2 transforms. The DsiHT zeroes all components of the input signal while moving, or heaping, the energy of the signal into one component, such as the first. We describe three different types of QR-decomposition that use the basic transforms with the T-, G-, and M-type complex matrices we introduce, and also one that uses analytical formulas without matrices. We also present the mixed QR-decomposition, in which different types of DsiHT are used at different stages of the algorithm. The number of such decompositions is greater than 3^(N-1) for an NxN complex matrix. Examples of the QR-decomposition are described in detail for 4x4 and 6x6 complex matrices and compared with the known method of Householder transforms. The precision of the QR-decompositions of NxN matrices is also compared for N equal to 6, 13, 17, 19, 21, 40, 64, 100, 128, 201, 256, and 400. MATLAB-based scripts of the codes for the QR-decompositions by the described DsiHTs are given.
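The DsiHT itself is specific to Grigoryan's construction, but its energy-heaping behavior, zeroing components of a column while concentrating its energy in the leading entry via 2x2 unitary transforms, is analogous to a cascade of complex Givens rotations. The sketch below is that standard Givens-style stand-in (not the DsiHT), checked against the factorization identity A = QR:

```python
import numpy as np

def givens_qr(A):
    """QR of a complex square matrix via 2x2 rotations that, like the
    DsiHT, 'heap' each column's energy into its leading component."""
    n = A.shape[0]
    Q = np.eye(n, dtype=complex)
    R = A.astype(complex)
    for j in range(n - 1):
        for i in range(n - 1, j, -1):     # zero R[i, j] against R[j, j]
            a, b = R[j, j], R[i, j]
            r = np.hypot(abs(a), abs(b))
            if r == 0:
                continue
            c, s = a / r, b / r           # 2x2 unitary "basic transform"
            G = np.eye(n, dtype=complex)
            G[j, j], G[j, i] = np.conj(c), np.conj(s)
            G[i, j], G[i, i] = -s, c
            R = G @ R                     # new R[j, j] = r, new R[i, j] = 0
            Q = Q @ G.conj().T            # accumulate the unitary factor
    return Q, R

A = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
Q, R = givens_qr(A)
print(np.allclose(Q @ R, A), np.max(np.abs(np.tril(R, -1))))
```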

Multiplicative Programming (MP) pertains to a spectrum of optimization problems that involve product terms. As the computational paradigms of communication systems continue to evolve, particularly concerning strategies for offloading computationally intensive tasks simultaneously to centralized or decentralized servers, designing and optimizing effective communication systems with MP techniques becomes increasingly indispensable. Similarly, Fractional Programming (FP) is another significant branch of optimization that addresses various essential scenarios in communication; for instance, in minimization problems, the transmission power and processing delay of communication systems are critical metrics. In a recent JSAC paper by Zhao et al. [2], an innovative transform (Zhao's optimization transform) was proposed for solving minimization MP and FP problems. Nevertheless, the resolution of optimization problems in communication systems encounters several limitations when adopting Zhao's optimization transform, especially for MP problems. Primarily, the objective functions in these problems typically involve sum-of-products terms, and the optimization variables are always discrete, leading to NP-hard problems. Furthermore, multiple functions mapping to the non-negative domain can result in auxiliary variables taking zero values, a situation that is avoided in FP problems because these functions appear in the denominator. In this paper, we introduce an updated transform, building on the foundations of Zhao's original method, designed to overcome these challenges by reformulating the original problem into a series of convex or concave problems. This reformulation yields a superior iterative algorithm with demonstrable convergence to a stationary point.
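For background on the FP side (this is the classic Dinkelbach iteration, shown for orientation only; it is neither Zhao's transform nor the updated transform introduced in this paper), minimizing a ratio f(x)/g(x) with g > 0 can be reduced to a sequence of parametric subproblems:

```python
from scipy.optimize import minimize_scalar

# Classic Dinkelbach iteration for min_x f(x)/g(x) with g > 0.
f = lambda x: (x - 3.0) ** 2 + 2.0   # numerator, e.g. processing delay
g = lambda x: x + 1.0                # positive denominator, e.g. rate

lam = 0.0
for _ in range(50):
    # parametric subproblem: minimize f(x) - lam * g(x)
    res = minimize_scalar(lambda x: f(x) - lam * g(x),
                          bounds=(0, 10), method='bounded')
    x = res.x
    new_lam = f(x) / g(x)            # updated ratio estimate
    if abs(new_lam - lam) < 1e-10:   # fixed point => optimal ratio
        break
    lam = new_lam

print(x, lam)                        # minimizer and the optimal ratio
```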

Valued constraint satisfaction problems (VCSPs) are a large class of computational optimisation problems. If the variables of a VCSP take values from a finite domain, then recent results in constraint satisfaction imply that the problem is in P or NP-complete, depending on the set of admitted cost functions. Here we study the larger class of cost functions over countably infinite domains that have an oligomorphic automorphism group. We present a hardness condition based on a generalisation of pp-constructability as known for (classical) CSPs. We also provide a universal-algebraic polynomial-time tractability condition based on the concept of fractional polymorphisms. We apply our general theory to study the computational complexity of resilience problems in database theory (under bag semantics). We show how to construct, for every fixed conjunctive query (and more generally for every union of conjunctive queries), a set of cost functions with an oligomorphic automorphism group such that the resulting VCSP is polynomial-time equivalent to the resilience problem; we only require that the query is connected, and we show that this assumption can be made without loss of generality. For the case where the query is acyclic, we obtain a complexity dichotomy for the resilience problem, based on the dichotomy for finite-domain VCSPs. To illustrate the utility of our methods, we settle the complexity of a (non-acyclic) conjunctive query that had remained open in the literature by verifying that it satisfies our tractability condition. We conjecture that for resilience problems our hardness and tractability conditions match, which would establish a complexity dichotomy for resilience problems for (unions of) conjunctive queries.
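To make the objects concrete: a VCSP instance assigns domain values to variables so as to minimise a sum of cost functions. The toy finite-domain instance below (our own example, solved by brute force; the paper's setting has countably infinite domains with oligomorphic symmetries, where no such enumeration exists) illustrates the format:

```python
from itertools import product

domain = [0, 1]
variables = ["x", "y", "z"]
# Each soft constraint is (scope, cost function over that scope).
constraints = [
    (("x", "y"), lambda a, b: 0 if a != b else 1),  # prefer x != y
    (("y", "z"), lambda a, b: abs(a - b)),          # prefer y == z
    (("x",),     lambda a: 2 * a),                  # prefer x == 0
]

def total_cost(assign):
    return sum(f(*(assign[v] for v in scope)) for scope, f in constraints)

assignments = [dict(zip(variables, vals))
               for vals in product(domain, repeat=len(variables))]
best = min(assignments, key=total_cost)
print(best, total_cost(best))  # {'x': 0, 'y': 1, 'z': 1} with cost 0
```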

This manuscript puts forward novel, practicable spatiotemporal Bayesian factor analysis frameworks that are computationally feasible for moderate to large data. Our models exhibit significantly enhanced computational scalability and storage efficiency, deliver strong overall modeling performance, and possess powerful inferential capabilities: they adequately predict outcomes at future time points or new spatial locations, and they satisfactorily cluster spatial locations into regions with similar temporal trajectories, a frequently encountered and crucial task. On top of a baseline separable factor model with temporally dependent latent factors and spatially dependent factor loadings under a probit stick-breaking process (PSBP) prior, we integrate a new slice sampling algorithm that permits an unknown, varying number of spatial mixture components across factors and guarantees that this number is non-increasing through the MCMC iterations, considerably enhancing model flexibility, efficiency, and scalability. We further introduce a novel spatial latent nearest-neighbor Gaussian process (NNGP) prior and new sequential updating algorithms for the spatially varying latent variables in the PSBP prior, thereby attaining high spatial scalability. Our simulation experiments substantiate the markedly accelerated posterior sampling and spatial prediction, as well as the strong modeling and inferential performance, of our models.
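The probit stick-breaking construction referenced above turns latent Gaussian values into mixture weights. A minimal sketch of that mapping for a single location, truncated at H components (the notation and truncation convention here are ours):

```python
import numpy as np
from scipy.stats import norm

def psbp_weights(alphas):
    """Probit stick-breaking: map latent Gaussian values alpha_1..alpha_H
    to mixture weights w_h = Phi(alpha_h) * prod_{l<h} (1 - Phi(alpha_l)),
    with the last weight absorbing the remaining stick."""
    probs = norm.cdf(np.asarray(alphas))          # stick-break probabilities
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - probs[:-1])))
    w = probs * remaining
    w[-1] = 1.0 - w[:-1].sum()                    # close the stick
    return w

w = psbp_weights([0.5, -0.2, 0.0])
print(w, w.sum())  # non-negative weights summing to 1
```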

We define weighted combinatorial Laplacian operators on a simplicial complex and investigate their spectral properties. We are especially interested in eigenvalues close to zero and their corresponding eigenvectors, and we show that they can detect almost $n$-dimensional holes in the given complex. Real-valued weights on simplices allow gradient-descent-based optimization, which in turn gives an efficient dynamic coverage-repair algorithm for the sensor network of a mobile robot team.
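For orientation, in the unweighted special case (all weights equal to one) the combinatorial Laplacian in dimension k is L_k = B_k^T B_k + B_{k+1} B_{k+1}^T, where B_k is the k-th boundary matrix, and the kernel of L_k has one dimension per k-dimensional hole. A sketch on a hollow triangle:

```python
import numpy as np

# Hodge 1-Laplacian of a hollow triangle (three vertices, three edges,
# no filled 2-simplex): unweighted toy case of the operators above.
# B1 maps edges to vertices; rows = vertices, cols = edges.
B1 = np.array([[-1,  0, -1],
               [ 1, -1,  0],
               [ 0,  1,  1]])   # edges (0,1), (1,2), (0,2)
B2 = np.zeros((3, 0))           # no triangles filled in

L1 = B1.T @ B1 + B2 @ B2.T
eigvals = np.linalg.eigvalsh(L1)
print(np.round(eigvals, 6))     # one zero eigenvalue: the 1-dim hole
```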

Code provides a general syntactic structure to build complex programs and perform precise computations when paired with a code interpreter. We hypothesize that language models (LMs) can leverage code-writing to improve Chain of Thought reasoning not only for logic and arithmetic tasks, but also for semantic ones (and in particular, those that are a mix of both). For example, consider prompting an LM to write code that counts the number of times it detects sarcasm in an essay: the LM may struggle to write an implementation of "detect_sarcasm(string)" that can be executed by the interpreter (handling the edge cases would be insurmountable). However, LMs may still produce a valid solution if they not only write code, but also selectively "emulate" the interpreter by generating the expected output of "detect_sarcasm(string)" and other lines of code that cannot be executed. In this work, we propose Chain of Code (CoC), a simple yet surprisingly effective extension that improves LM code-driven reasoning. The key idea is to encourage LMs to format semantic sub-tasks in a program as flexible pseudocode whose undefined behaviors the interpreter can explicitly catch and hand off to an LM to simulate (as an "LMulator"). Experiments demonstrate that Chain of Code outperforms Chain of Thought and other baselines across a variety of benchmarks; on BIG-Bench Hard, Chain of Code achieves 84%, a gain of 12% over Chain of Thought. CoC scales well with large and small models alike, and broadens the scope of reasoning questions that LMs can correctly answer by "thinking in code". Project webpage: https://chain-of-code.github.io.
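The execution loop described here, run each line with the real interpreter when possible and otherwise let an LM fill in the effect, can be sketched as below. `lm_emulate`/`fake_lm` are hypothetical stubs standing in for an actual LM call; this is our reading of the control flow, not the paper's implementation.

```python
def run_chain_of_code(lines, lm_emulate):
    """Sketch of CoC-style execution: run each pseudocode line with the
    real interpreter when possible; otherwise hand the line and current
    program state to an LM 'LMulator' (here a caller-supplied stub)."""
    state = {}
    for line in lines:
        try:
            exec(line, {}, state)                  # executable line: run it
        except Exception:
            state.update(lm_emulate(line, state))  # else: LM fills the state
    return state

# Hypothetical stub standing in for a real language model call.
def fake_lm(line, state):
    if "detect_sarcasm" in line:
        return {"is_sarcastic": True}              # the LM "decides" the value
    return {}

program = [
    "count = 0",
    "is_sarcastic = detect_sarcasm('Oh, great.')",  # undefined: LMulator
    "count = count + (1 if is_sarcastic else 0)",
]
print(run_chain_of_code(program, fake_lm)["count"])  # 1
```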

An effective and efficient architecture performance evaluation scheme is essential to the success of Neural Architecture Search (NAS). To save computational cost, most existing NAS algorithms train and evaluate intermediate neural architectures on a small proxy dataset with limited training epochs, but such a coarse evaluation can hardly yield an accurate performance estimate for an architecture. This paper advocates a new neural architecture evaluation scheme that aims to determine which architecture would perform better rather than accurately predicting absolute architecture performance. We therefore propose a relativistic architecture performance predictor for NAS (ReNAS). We encode neural architectures into feature tensors and further refine the representations with the predictor. The proposed relativistic performance predictor can be deployed in discrete search methods to search for the desired architectures without additional evaluation. Experimental results on the NAS-Bench-101 dataset suggest that sampling 424 (0.1% of the entire search space) neural architectures and their corresponding validation performance is already enough for learning an accurate architecture performance predictor. The accuracies of our searched neural architectures on the NAS-Bench-101 and NAS-Bench-201 datasets are higher than those of the state-of-the-art methods, demonstrating the superiority of the proposed method.
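One natural way to realize a predictor that ranks rather than regresses is a pairwise hinge loss over encoded architecture features. The sketch below uses synthetic features and accuracies (the real ReNAS encodes NAS-Bench-101 cells into feature tensors; everything here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 64-dim feature vectors for 424 architectures
# and their validation accuracies (synthetic, linearly learnable).
N, D = 424, 64
X = rng.normal(size=(N, D))
acc = X @ rng.normal(size=D) + 0.1 * rng.normal(size=N)

w = np.zeros(D)                       # linear ranking predictor (a sketch)
lr, margin = 1e-2, 0.1
for _ in range(2000):
    i, j = rng.integers(N, size=2)    # sample a random pair
    if acc[i] == acc[j]:
        continue
    if acc[i] < acc[j]:               # ensure i is the better architecture
        i, j = j, i
    # pairwise hinge: penalize when the worse arch is not scored lower
    if X[i] @ w - X[j] @ w < margin:
        w += lr * (X[i] - X[j])       # gradient step on the violated pair

scores = X @ w                        # rank architectures by predicted score
agree = np.mean(np.sign(np.subtract.outer(scores, scores))
                == np.sign(np.subtract.outer(acc, acc)))
print(agree)                          # fraction of correctly ordered pairs
```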
