
A general optimization framework is proposed for simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) with coupled phase shifts, which converges to the Karush-Kuhn-Tucker (KKT) optimal solution under mild conditions. In particular, the amplitude and phase-shift coefficients of the STAR-RIS are optimized alternately in closed form. To demonstrate the effectiveness of the proposed optimization framework, a throughput maximization problem is considered as a case study, and it is rigorously proved that the KKT optimal solution is obtained. Numerical results confirm the effectiveness of the proposed framework compared to baseline schemes.
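As a rough illustration of the alternating structure described above, the toy sketch below optimizes a single STAR-RIS element under a hypothetical coupled phase-shift model ($\beta_t + \beta_r = 1$, reflection phase offset from the transmission phase by $\pm\pi/2$); the channels, the rate objective, and the grid search standing in for the paper's closed-form updates are all illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Toy alternating optimization for one STAR-RIS element (hypothetical
# model: beta_t + beta_r = 1, and the reflection phase is offset from the
# transmission phase by +/- pi/2). Grid search stands in for a closed-form
# amplitude update; the channels h_*, d_* are made up.
rng = np.random.default_rng(0)
h_t, h_r, d_t, d_r = rng.normal(size=4) + 1j * rng.normal(size=4)

def rate(beta_t, theta_t, theta_r):
    s_t = d_t + np.sqrt(beta_t) * np.exp(1j * theta_t) * h_t
    s_r = d_r + np.sqrt(1.0 - beta_t) * np.exp(1j * theta_r) * h_r
    return np.log2(1 + abs(s_t) ** 2) + np.log2(1 + abs(s_r) ** 2)

beta_t = 0.5
for _ in range(20):
    # phase step: co-phase the transmission link, then pick the better
    # of the two coupled reflection phases
    theta_t = np.angle(d_t) - np.angle(h_t)
    theta_r = max((theta_t + off for off in (np.pi / 2, -np.pi / 2)),
                  key=lambda th: rate(beta_t, theta_t, th))
    # amplitude step: one-dimensional problem, solved here on a fine grid
    grid = np.linspace(1e-3, 1.0 - 1e-3, 1001)
    beta_t = grid[np.argmax([rate(b, theta_t, theta_r) for b in grid])]
print(round(rate(beta_t, theta_t, theta_r), 3))
```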

Related content

In this paper, we consider change-points in multiple sequences, with the objective of minimizing the estimation error of a sequence by making use of information from other sequences. This is in contrast to recent interest in change-points in multiple sequences, where the focus is on the detection of common change-points. We start with the canonical case of a single sequence with constant change-point intensities. We consider two measures of a change-point algorithm. The first is the probability of estimating the change-point with no error. The second is the expected distance between the true and estimated change-points. We provide a theoretical upper bound on the no-error probability, and a lower bound on the expected distance, that must be satisfied by all algorithms. We propose a scan-CUSUM algorithm that achieves the no-error upper bound and comes close to the distance lower bound. We next consider the case of non-constant intensities and establish sharp conditions under which the estimation error can go to zero. We propose an extension of the scan-CUSUM algorithm for a non-constant intensity function, and show that it achieves asymptotically zero error at the boundary of the zero-error regime. We illustrate an application of the scan-CUSUM algorithm on multiple sequences sharing an unknown, non-constant intensity function: we estimate the intensity function from the change-point profile likelihoods of all sequences and apply scan-CUSUM with the estimated intensity function.
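For intuition, here is a minimal single-change-point CUSUM estimator for a shift in mean (illustrative only; the paper's scan-CUSUM additionally scans windows and exploits the change-point intensity, and the data below are made up):

```python
import numpy as np

# Minimal CUSUM estimator for a single change in mean: maximize the
# standardized partial-sum statistic |S_k - (k/n) S_n| / sqrt(k(n-k)/n).
def cusum_changepoint(x):
    n, total = len(x), x.sum()
    best_k, best_stat, csum = 1, -np.inf, 0.0
    for k in range(1, n):                 # candidate change at position k
        csum += x[k - 1]
        stat = abs(csum - k * total / n) / np.sqrt(k * (n - k) / n)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 100)])
print(cusum_changepoint(x))               # should be close to 200
```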

Traditional, numerical discretization-based solvers of partial differential equations (PDEs) are fundamentally agnostic to domains, boundary conditions and coefficients. In contrast, machine-learnt solvers have limited generalizability across these elements of boundary value problems. This is especially true of surrogate models, which are typically trained on direct numerical simulations of PDEs applied to one specific boundary value problem. In a departure from this direct approach, label-free machine learning of solvers is centered on a loss function that incorporates the PDE and boundary conditions in residual form. However, the generalization of such solvers across boundary conditions is limited, and they remain strongly domain-dependent. Here, we present a framework that generalizes across domains, boundary conditions and coefficients simultaneously by learning the PDE in weak form. Our work explores the ability of simple, convolutional neural network (CNN)-based encoder-decoder architectures to learn to solve a PDE in greater generality than its restriction to a particular boundary value problem. In this first communication, we consider the elliptic PDEs of Fickean diffusion and of linear and nonlinear elasticity. Importantly, the learning happens independently of any labelled field data from either experiments or direct numerical solutions. Extensive results for these problem classes demonstrate the framework's ability to learn PDE solvers that generalize across hundreds of thousands of domains, boundary conditions and coefficients, including extrapolation beyond the learning regime. Once trained, the machine-learnt solvers are orders of magnitude faster than discretization-based solvers. We place our work in the context of recent continuous operator learning frameworks, and note extensions to transfer learning, active learning and reinforcement learning.
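To make the weak-form, label-free idea concrete, the following minimal sketch minimizes the discrete Dirichlet energy of steady Fickean diffusion, whose minimizer under fixed boundary values solves the PDE; plain gradient descent on the grid values stands in for the paper's encoder-decoder network, and the grid size, coefficient and boundary data are assumptions for illustration.

```python
import numpy as np

# Label-free, variational (weak-form) loss for steady Fickean diffusion on
# a unit square: the discrete Dirichlet energy
#   E(u) = 0.5 * D * sum(|grad u|^2) * dx^2,
# whose minimizer under Dirichlet boundary values solves div(D grad u) = 0.
# A trained encoder-decoder would output u; here we descend on u directly.
n, dx, D = 32, 1.0 / 32, 1.0
u = np.zeros((n, n))
u[0, :], u[-1, :] = 0.0, 1.0          # prescribed values on two edges

for _ in range(3000):
    # gradient of E w.r.t. interior nodes is -D * (discrete Laplacian)
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
           + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    u[1:-1, 1:-1] += 0.2 * D * lap[1:-1, 1:-1]

gx = (u[1:, :] - u[:-1, :]) / dx      # forward differences
gy = (u[:, 1:] - u[:, :-1]) / dx
print(round(0.5 * D * (np.sum(gx ** 2) + np.sum(gy ** 2)) * dx ** 2, 4))
```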

Deep Q-learning-based algorithms have been applied successfully to many decision-making problems, yet their theoretical foundations are not as well understood. In this paper, we study Fitted Q-Iteration with a two-layer ReLU neural network parameterization and establish sample complexity guarantees for the algorithm. Our approach estimates the Q-function in each iteration by solving a convex optimization problem. We show that this approach achieves a sample complexity of $\tilde{\mathcal{O}}(1/\epsilon^{2})$, which is order-optimal. This result holds for countable state spaces and does not require any assumptions such as a linear or low-rank structure on the MDP.
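A minimal sketch of the Fitted Q-Iteration loop on a hypothetical one-dimensional MDP is given below; the paper fits a two-layer ReLU network via a convex program, and ridge regression on fixed random ReLU features is used here only as a simple convex stand-in.

```python
import numpy as np

# Fitted Q-Iteration on a made-up 1-D MDP: the state drifts by the chosen
# action plus noise, and the reward penalizes distance from the origin.
rng = np.random.default_rng(2)
n_feat, gamma, n_actions = 64, 0.9, 2
W = rng.normal(size=(2, n_feat))               # random first-layer weights

def feats(s, a):
    x = np.stack([s, a * np.ones_like(s)], axis=-1)
    return np.maximum(x @ W, 0.0)              # ReLU features of (s, a)

s = rng.uniform(-1, 1, 500)                    # offline batch of transitions
a = rng.integers(0, n_actions, 500)
s2 = np.clip(s + 0.1 * (2 * a - 1) + 0.05 * rng.normal(size=500), -1, 1)
r = -s2 ** 2

theta = np.zeros(n_feat)
for _ in range(50):                            # FQI iterations
    q_next = np.max([feats(s2, b) @ theta for b in range(n_actions)], axis=0)
    y = r + gamma * q_next                     # Bellman targets
    X = feats(s, a)                            # convex (ridge) fit of Q
    theta = np.linalg.solve(X.T @ X + 1e-3 * np.eye(n_feat), X.T @ y)
print(theta[:3].round(3))
```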

Modern policy optimization methods in applied reinforcement learning are often inspired by the trust region policy optimization algorithm, which can be interpreted as a particular instance of policy mirror descent. While theoretical guarantees have been established for this framework, particularly in the tabular setting, the use of general parametrization schemes remains mostly unjustified. In this work, we introduce a novel framework for policy optimization based on mirror descent that naturally accommodates general parametrizations. The policy class induced by our scheme recovers known classes, e.g., tabular softmax, log-linear, and neural policies; it also generates new ones, depending on the choice of the mirror map. For a general mirror map and parametrization function, we establish the quasi-monotonicity of the updates in the value function and global linear convergence rates, and we bound the total variation of the algorithm along its path. To showcase the ability of our framework to accommodate general parametrization schemes, we present a case study involving shallow neural networks.
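For concreteness, here is the tabular softmax special case the framework recovers: with the negative-entropy mirror map, the policy mirror descent update has the closed form $\pi \propto \pi \cdot \exp(\eta Q)$ (normalized per state). The two-state MDP below is a made-up example, not from the paper.

```python
import numpy as np

# Tabular policy mirror descent with the negative-entropy mirror map:
# pi <- normalize(pi * exp(eta * Q)). Exact policy evaluation is used.
P = np.array([[[0.9, 0.1], [0.1, 0.9]],   # transition tensor P[s, a, s']
              [[0.8, 0.2], [0.3, 0.7]]])
R = np.array([[1.0, 0.0], [0.0, 1.0]])    # reward table R[s, a]
gamma, eta = 0.9, 1.0
pi = np.full((2, 2), 0.5)                 # uniform initial policy

def q_values(pi):
    # exact evaluation: V = (I - gamma * P_pi)^{-1} r_pi
    P_pi = np.einsum('sab,sa->sb', P, pi)
    r_pi = np.einsum('sa,sa->s', R, pi)
    V = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)
    return R + gamma * np.einsum('sab,b->sa', P, V)

for _ in range(100):
    pi = pi * np.exp(eta * q_values(pi))  # mirror descent step
    pi /= pi.sum(axis=1, keepdims=True)
print(pi.round(3))   # concentrates on the better action in each state
```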

Spiking neural networks (SNNs), which operate with asynchronous discrete events, offer high energy efficiency. A popular approach to implementing deep SNNs is ANN-SNN conversion, which combines the efficient training of ANNs with the efficient inference of SNNs. However, due to the intrinsic differences between ANNs and SNNs, the accuracy loss is usually non-negligible, especially with few simulation time steps. This greatly restricts the application of SNNs on latency-sensitive edge devices. In this paper, we identify that this performance degradation stems from the misrepresentation of negative or overflowing residual membrane potential in SNNs. Inspired by this, we systematically analyze the conversion error between SNNs and ANNs and decompose it into three parts: quantization error, clipping error, and residual membrane potential representation error. With these insights, we propose a dual-phase conversion algorithm to minimize those errors separately. Moreover, we show that each phase achieves significant performance gains in a complementary manner. We evaluate our method on challenging datasets including CIFAR-10, CIFAR-100, and ImageNet. The experimental results show that the proposed method achieves state-of-the-art accuracy and latency with promising energy savings compared to ANNs. For instance, our method achieves an accuracy of 73.20% on CIFAR-100 in only 2 time steps with 15.7$\times$ less energy consumption.
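The quantization and clipping parts of the conversion error are easy to see on a single integrate-and-fire neuron: over $T$ time steps the firing rate is a clipped, quantized version of the ANN's ReLU activation, so few steps mean large error. The sketch below (threshold and inputs are assumed values) illustrates this.

```python
# Single integrate-and-fire neuron with soft reset: its firing rate over
# T steps approximates the ANN activation clip(z, 0, 1), up to a
# quantization error of order 1/T.
def snn_rate(z, T, v_th=1.0):
    v, spikes = 0.0, 0
    for _ in range(T):
        v += z                        # integrate a constant input per step
        if v >= v_th:
            v -= v_th                 # soft reset keeps residual potential
            spikes += 1
    return spikes * v_th / T          # firing rate as activation estimate

for z in (0.23, 0.8, 1.7):            # ANN pre-activations
    target = min(max(z, 0.0), 1.0)    # clipped ReLU the rate approximates
    for T in (2, 16, 256):
        print(z, T, round(snn_rate(z, T) - target, 3))
```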

A growing body of research shows how to replace classical partial differential equation (PDE) integrators with neural networks. A popular strategy is to generate input-output pairs with a PDE solver, train the neural network in the regression setting, and use the trained model as a cheap surrogate for the solver. The bottleneck in this scheme is the number of expensive queries to the PDE solver needed to generate the dataset. To alleviate the problem, we propose a computationally cheap augmentation strategy based on general covariance and simple random coordinate transformations. Our approach relies on the fact that physical laws are independent of the choice of coordinates, so a change of coordinate system preserves the type of a parametric PDE and only changes the PDE's data (e.g., initial conditions, diffusion coefficient). Across the neural networks and partial differential equations we tried, the proposed augmentation improves test error by 23% on average. The worst observed result is a 17% increase in test error for a multilayer perceptron, and the best is an 80% decrease for a dilated residual network.
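The simplest instance of such an augmentation, sketched below for an isotropic problem, applies the same transform to the input field and the solver output; the paper's transformations are more general random coordinate changes that also transform the PDE's data, and the arrays here are placeholders.

```python
import numpy as np

# Covariance-based augmentation sketch: physics does not depend on the
# coordinate system, so transforming coordinates maps one solved boundary
# value problem to another valid one. Rotations/flips are the simplest case.
def augment(u_in, u_out, rng):
    k = int(rng.integers(4))            # random multiple of 90 degrees
    flip = bool(rng.integers(2))
    t = lambda a: np.flipud(np.rot90(a, k)) if flip else np.rot90(a, k)
    return t(u_in), t(u_out)            # same transform on input and label

rng = np.random.default_rng(3)
u_in = rng.normal(size=(32, 32))        # e.g. a coefficient field
u_out = rng.normal(size=(32, 32))       # solver output (placeholder data)
a_in, a_out = augment(u_in, u_out, rng)
print(a_in.shape, a_out.shape)
```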

We study nonlinear optimization problems with a stochastic objective and deterministic equality and inequality constraints, which emerge in numerous applications including finance, manufacturing, power systems and, recently, deep neural networks. We propose an active-set stochastic sequential quadratic programming (StoSQP) algorithm that uses a differentiable exact augmented Lagrangian as the merit function. The algorithm adaptively selects the penalty parameters of the augmented Lagrangian and performs a stochastic line search to decide the stepsize. Global convergence is established: for any initialization, the KKT residuals converge to zero almost surely. Our algorithm and analysis further develop the prior work of Na et al. (2022). Specifically, we allow nonlinear inequality constraints without requiring the strict complementarity condition; refine some of the designs in Na et al. (2022), such as the feasibility error condition and the monotonically increasing sample size; strengthen the global convergence guarantee; and improve the sample complexity on the objective Hessian. We demonstrate the performance of the designed algorithm on a subset of nonlinear problems from the CUTEst test set and on constrained logistic regression problems.
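To show the structure such an algorithm builds on, here is a minimal deterministic SQP step for an equality-constrained toy problem: solve the KKT system for the search direction, then backtrack on a merit function. An $\ell_1$ merit with a fixed penalty is used here for simplicity; the paper's algorithm instead uses a differentiable exact augmented Lagrangian with adaptive penalties and stochastic gradients.

```python
import numpy as np

# Toy equality-constrained problem: min x^2 + 2y^2  s.t.  x + y = 1.
# Each iteration solves the KKT system for the SQP direction, then
# backtracks on an l1 merit function with a fixed penalty (simplified).
def f(x): return x[0] ** 2 + 2 * x[1] ** 2
def g(x): return np.array([2 * x[0], 4 * x[1]])     # gradient of f
def c(x): return np.array([x[0] + x[1] - 1.0])      # equality constraint
J = np.array([[1.0, 1.0]])                          # constraint Jacobian
H = np.diag([2.0, 4.0])                             # Hessian of f

merit = lambda z: f(z) + 10.0 * abs(c(z)[0])        # l1 merit function
x = np.array([2.0, 2.0])
for _ in range(10):
    kkt = np.block([[H, J.T], [J, np.zeros((1, 1))]])
    d = np.linalg.solve(kkt, -np.concatenate([g(x), c(x)]))[:2]
    t = 1.0                                         # backtracking search
    while t > 1e-8 and merit(x + t * d) > merit(x) - 1e-4 * t * (g(x) @ d):
        t *= 0.5
    x = x + t * d
print(x.round(4))                                   # about [0.6667, 0.3333]
```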

Local search is an effective method for solving large-scale combinatorial optimization problems, and it has made remarkable progress in recent years through several subtle mechanisms. In this paper, we identify two ways to improve local search algorithms for pseudo-Boolean optimization (PBO). First, some of these mechanisms, such as unit propagation, have previously been used only for MaxSAT but can be generalized to PBO as well. Second, existing local search algorithms mainly guide the search with a variable-level heuristic, the so-called score; we instead seek insight at the clause level, since a clause acts as a middleman between the variables and the given formula. Hence, we first extend the unit-propagation-based decimation algorithm to the PBO problem, giving a generalized definition of a unit clause for PBO, and apply it in the existing solver LS-PBO to construct an initial assignment; we then introduce a new clause-level heuristic, dubbed care, which gives higher priority to clauses that are satisfied less often in the current iterations. Experiments on three real-world application benchmarks, including minimum-width confidence band, wireless sensor network optimization, and seating arrangement problems, show that our algorithm, DeciLS-PBO, performs promisingly compared to state-of-the-art algorithms.
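The clause-priority idea can be sketched in a few lines: the toy local search below solves made-up pseudo-Boolean constraints and bumps the weight of constraints that remain unsatisfied, repairing the highest-weight one first, in the spirit of (but much cruder than) the care heuristic.

```python
import random

# Toy local search over pseudo-Boolean constraints sum(coeff*literal) >= bound.
# Unsatisfied constraints accumulate weight; the highest-weight one is
# repaired first. Illustrative only, not the DeciLS-PBO algorithm.
random.seed(0)
n_vars = 6
# each constraint: a list of (coeff, var index, sign) plus a lower bound
cons = [([(2, 0, True), (1, 1, True), (1, 2, False)], 2),
        ([(3, 3, True), (2, 4, True)], 3),
        ([(1, 1, False), (2, 5, True), (1, 0, False)], 2)]
assign = [random.random() < 0.5 for _ in range(n_vars)]
weight = [1.0] * len(cons)

def lhs(clause, a):
    return sum(co for co, v, sign in clause if a[v] == sign)

for _ in range(1000):
    unsat = [i for i, (cl, b) in enumerate(cons) if lhs(cl, assign) < b]
    if not unsat:
        break
    i = max(unsat, key=lambda j: weight[j])   # most "cared-about" constraint
    weight[i] += 1.0                          # bump its priority
    cl, b = cons[i]
    v = random.choice([v for _, v, _ in cl])  # flip one of its variables
    assign[v] = not assign[v]
print(assign, all(lhs(cl, assign) >= b for cl, b in cons))
```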

Modeling the microstructure evolution of a material embedded in a device often involves integral boundary conditions. Here we propose a modified Nitsche's method to solve the Poisson equation with an integral boundary condition, coupled to phase-field equations for the microstructure evolution of a strongly correlated material undergoing metal-insulator transitions. Our numerical experiments demonstrate that the proposed method achieves the optimal convergence rate, whereas the rate of convergence of the conventional Lagrange multiplier method is not optimal. Furthermore, the linear system arising from the modified Nitsche's method can be solved by an iterative solver with algebraic multigrid preconditioning. The modified Nitsche's method can be applied to other physical boundary conditions that are mathematically similar to this electric integral boundary condition.
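A one-dimensional analogue illustrates two ways of imposing an integral condition $\int_0^1 u\,dx = g$ on $-u'' = f$: a Lagrange multiplier yields an indefinite saddle-point system, while a penalty term keeps the system positive definite. The penalty variant below is only a crude stand-in for the paper's (consistent) modified Nitsche's method; the mesh, data and penalty value are assumed.

```python
import numpy as np

# 1-D Poisson -u'' = f on (0,1), u(0) = u(1) = 0, plus int u dx = g,
# discretized by finite differences: Lagrange multiplier vs. penalty.
n, g_val = 50, 0.3
h = 1.0 / (n + 1)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h ** 2
f = np.ones(n)
w = h * np.ones(n)                         # quadrature weights for int u dx

# Lagrange multiplier: [[A, w], [w^T, 0]] [u; lam] = [f; g]
K = np.block([[A, w[:, None]], [w[None, :], np.zeros((1, 1))]])
u_lag = np.linalg.solve(K, np.concatenate([f, [g_val]]))[:n]

# penalty: (A + p w w^T) u = f + p g w, with p large
p = 1e6
u_pen = np.linalg.solve(A + p * np.outer(w, w), f + p * g_val * w)
print(round(w @ u_lag, 4), round(w @ u_pen, 4))   # both approx 0.3
```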

The geometric optimisation of crystal structures is a procedure widely used in chemistry that changes the geometrical placement of the particles inside a structure. It is called structural relaxation and constitutes a local minimization problem with a non-convex objective function whose domain complexity increases with the number of particles involved. In this work we study the performance of the two most popular first-order optimisation methods, Gradient Descent and Conjugate Gradient, in structural relaxation; the respective pseudocodes can be found in Section 6. Although frequently employed, these methods have received little study in this context from an algorithmic point of view. In order to accurately define the problem, we provide a thorough derivation of all necessary formulae related to the crystal structure energy function and its differentiation. We run each algorithm with a constant step size, which provides a benchmark for the methods' analysis and direct comparison. We also design dynamic step size rules and study how these improve the two algorithms' performance. Our results show that there is a trade-off between convergence rate and the probability that an experiment succeeds, hence we construct a function that assigns utility to each method based on our respective preferences. The function is built according to a recently introduced model of preference indication concerning algorithms with deadlines and their run time. Finally, building on all our insights from the experimental results, we provide algorithmic recipes that best correspond to each of the presented preferences, and we select one recipe as optimal for equally weighted preferences.
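A simplified sketch of the two methods with a constant step size, applied to a made-up three-particle Lennard-Jones cluster rather than the crystal structures of the paper, is given below; the Fletcher-Reeves variant of Conjugate Gradient with periodic restarts is one common choice and is an assumption here.

```python
import numpy as np

# Structural relaxation of a toy 3-particle 2-D Lennard-Jones cluster
# (sigma = eps = 1): Gradient Descent vs. Fletcher-Reeves Conjugate
# Gradient, both with a constant step size.
def energy_grad(x):
    pos = x.reshape(-1, 2)
    e, g = 0.0, np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            d = pos[i] - pos[j]
            r2 = d @ d
            inv6 = 1.0 / r2 ** 3
            e += 4 * (inv6 ** 2 - inv6)              # pair energy
            coef = 24 * (2 * inv6 ** 2 - inv6) / r2  # grad w.r.t. pos[i] is -coef*d
            g[i] -= coef * d
            g[j] += coef * d
    return e, g.ravel()

x0 = np.array([0.0, 0.0, 1.5, 0.1, 0.7, 1.4])
step = 0.002

x = x0.copy()
for _ in range(800):                                 # gradient descent
    _, g = energy_grad(x)
    x -= step * g
print('GD', round(energy_grad(x)[0], 4))

x, d, g_old = x0.copy(), None, None
for k in range(800):                                 # Fletcher-Reeves CG
    _, g = energy_grad(x)
    if d is None or k % 5 == 0:
        d = -g                                       # periodic restart
    else:
        d = -g + (g @ g) / (g_old @ g_old) * d
    x += step * d
    g_old = g
print('CG', round(energy_grad(x)[0], 4))
```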
