
We propose and analyze an augmented mixed finite element method for the pseudostress-velocity formulation of the stationary convective Brinkman-Forchheimer problem in $\mathbb{R}^d$, $d\in \{2,3\}$. Since the convective and Forchheimer terms force the velocity to live in a smaller space than usual, we augment the variational formulation with suitable Galerkin-type terms. The resulting augmented scheme is written equivalently as a fixed-point equation, so that the well-known Schauder and Banach theorems, combined with the Lax-Milgram theorem, allow us to prove the unique solvability of the continuous problem. The finite element discretization involves Raviart-Thomas spaces of order $k\geq 0$ for the pseudostress tensor and continuous piecewise polynomials of degree $\le k + 1$ for the velocity. Stability, convergence, and a priori error estimates for the associated Galerkin scheme are obtained. In addition, we derive two reliable and efficient residual-based a posteriori error estimators for this problem on arbitrary polygonal and polyhedral regions. The reliability of the proposed estimators draws mainly upon the uniform ellipticity of the form involved, a suitable assumption on the data, a stable Helmholtz decomposition, and the local approximation properties of the Cl\'ement and Raviart-Thomas operators. In turn, inverse inequalities, the localization technique based on bubble functions, and known results from previous works are the main tools yielding the efficiency estimate. Finally, some numerical examples illustrating the performance of the mixed finite element method, confirming the theoretical rate of convergence and the properties of the estimators, and showing the behaviour of the associated adaptive algorithms, are reported. In particular, the case of flow through a $2$D porous medium with fracture networks is considered.
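To make the fixed-point strategy concrete, the sketch below applies a Picard iteration to a small algebraic analogue with a Forchheimer-type nonlinearity $|u|u$: the nonlinear term is frozen at the previous iterate and a linear system is solved at each step. This is only a toy illustration of the Banach fixed-point argument, not the authors' augmented mixed scheme; the matrix, the forcing vector, and the function name `picard_forchheimer` are made up for the demonstration.

```python
import numpy as np

# Toy algebraic analogue of the fixed-point (Picard) strategy: solve
# A u + c * |u| * u = f by freezing the nonlinearity at the previous
# iterate and solving a linear system at each step.  This only mimics
# the Banach fixed-point argument; it is not the augmented mixed scheme.
def picard_forchheimer(A, f, c=1.0, tol=1e-10, max_iter=100):
    u = np.zeros_like(f)                      # initial guess
    for k in range(max_iter):
        # linearize: (A + c * diag(|u_k|)) u_{k+1} = f
        u_new = np.linalg.solve(A + c * np.diag(np.abs(u)), f)
        if np.linalg.norm(u_new - u) < tol * max(np.linalg.norm(u_new), 1.0):
            return u_new, k + 1
        u = u_new
    raise RuntimeError("Picard iteration did not converge")

# small SPD "Brinkman" matrix plus the Forchheimer-type nonlinearity
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5.0 * np.eye(5)                 # dominant linear part => contraction
f = rng.standard_normal(5)
u, iters = picard_forchheimer(A, f)
print(iters, np.linalg.norm(A @ u + np.abs(u) * u - f))  # iterations, residual
```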

Related Content

This paper proposes a new approach to identifying the effective cointegration rank in high-dimensional unit-root (HDUR) time series from a prediction perspective, using reduced-rank regression. For a HDUR process $\mathbf{x}_t\in \mathbb{R}^N$ and a stationary series $\mathbf{y}_t\in \mathbb{R}^p$ of interest, our goal is to predict future values of $\mathbf{y}_t$ using $\mathbf{x}_t$ and lagged values of $\mathbf{y}_t$. The proposed framework consists of a two-step estimation procedure. First, Principal Component Analysis (PCA) is used to identify all cointegrating vectors of $\mathbf{x}_t$. Second, the cointegrated stationary series are used as regressors, together with some lagged variables of $\mathbf{y}_t$, to predict $\mathbf{y}_t$. The estimated reduced rank is then defined as the effective cointegration rank of $\mathbf{x}_t$. Under the scenario that the autoregressive coefficient matrices are sparse (or of low rank), we apply the Least Absolute Shrinkage and Selection Operator (or reduced-rank techniques) to estimate the autoregressive coefficients when the dimension involved is high. Theoretical properties of the estimators are established as the dimensions $p$ and $N$ and the sample size $T$ tend to infinity. Both simulated and real examples are used to illustrate the proposed framework, and the empirical application suggests that the proposed procedure fares well in predicting stock returns.
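As an illustration of the two-step procedure, the sketch below simulates a unit-root panel, estimates cointegrating directions as the eigenvectors of the sample covariance of $\mathbf{x}_t$ associated with its smallest eigenvalues (one common PCA-based route; the paper's exact estimator may differ), and then fits a LASSO regression of a scalar target on the estimated stationary combinations and a lagged target. All variable names and tuning parameters are assumptions made for the demonstration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
T, N, r = 400, 20, 3   # sample size, dimension of x_t, number of cointegrating vectors

# Simulate a unit-root x_t that admits r stationary linear combinations:
# (N - r) random-walk factors plus r stationary components, mixed together.
common = np.cumsum(rng.standard_normal((T, N - r)), axis=0)
stationary = rng.standard_normal((T, r))
mix = rng.standard_normal((N, N))
x = np.hstack([common, stationary]) @ mix.T

# Step 1: PCA on the sample covariance of x_t; the eigenvectors attached to
# the smallest eigenvalues give linear combinations with bounded variance,
# taken here as the estimated cointegrating directions.
S = np.cov(x, rowvar=False)
eigval, eigvec = np.linalg.eigh(S)      # eigenvalues in ascending order
B_hat = eigvec[:, :r]                   # N x r estimated cointegrating matrix
w = x @ B_hat                           # estimated stationary series (T x r)

# Step 2: LASSO regression of y_{t+1} on w_t and the lagged y_t.
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.4 * y[t - 1] + 0.8 * stationary[t - 1, 0] + 0.1 * rng.standard_normal()
Z = np.column_stack([w[:-1], y[:-1]])   # regressors observed at time t
model = Lasso(alpha=0.01).fit(Z, y[1:]) # predict y_{t+1}
print(model.coef_)
```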

In many recommender systems and search problems, presenting a well-balanced set of results can be an important goal in addition to serving highly relevant content. For example, in a movie recommendation system, it may be helpful to achieve a certain balance of different genres; likewise, it may be important to balance between highly popular and highly personalized shows. Such balances can be considered across many categories and may be required for an enhanced user experience, business considerations, fairness objectives, etc. In this paper, we consider the problem of calibrating with respect to any given categories over items. We propose a way to balance the trade-off between relevance and calibration via a Linear Programming optimization problem in which we learn a doubly stochastic matrix that achieves the optimal balance in expectation. We then realize the learned policy using the Birkhoff-von Neumann decomposition of a doubly stochastic matrix. Several optimizations of the basic approach are considered to make it fast. The experiments show that the proposed formulation can achieve a much better trade-off compared to many other baselines. This paper does not prescribe the exact categories to calibrate over (such as genres) universally for applications; this choice likely depends on the particular task or business objective. The main contribution of the paper is a framework that can be applied to a variety of problems, and we demonstrate the efficacy of the proposed method on a few use cases.
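A minimal sketch of the policy-realization step follows: a doubly stochastic matrix is decomposed into a convex combination of permutation matrices via the Birkhoff-von Neumann construction (repeatedly extracting a permutation supported on the strictly positive entries with the Hungarian algorithm), and a permutation is then sampled according to the weights. The linear program that learns the doubly stochastic matrix is not shown, and the example matrix is made up for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def birkhoff_von_neumann(D, tol=1e-9):
    """Decompose a doubly stochastic matrix D into sum_k w_k * P_k,
    with permutation matrices P_k and nonnegative weights w_k summing to 1."""
    D = D.astype(float).copy()
    n = D.shape[0]
    terms = []
    while D.sum() > tol:
        # find a permutation supported on the strictly positive entries:
        # zero cost on the support, prohibitive cost elsewhere
        cost = np.where(D > tol, 0.0, 1.0)
        rows, cols = linear_sum_assignment(cost)
        assert cost[rows, cols].sum() == 0.0, "no permutation inside support"
        w = D[rows, cols].min()            # largest weight we can peel off
        P = np.zeros((n, n))
        P[rows, cols] = 1.0
        terms.append((w, P))
        D -= w * P                         # remainder stays doubly stochastic up to scaling
    return terms

# example: realize a stochastic ranking policy by sampling permutations
D = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.2, 0.5]])
terms = birkhoff_von_neumann(D)
weights = np.array([w for w, _ in terms])
idx = np.random.default_rng(0).choice(len(terms), p=weights / weights.sum())
print(terms[idx][1])                       # permutation (slate ordering) to present
```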

In device-to-device (D2D) coded caching problems, it is possible that not all users will make file requests in the delivery phase. Hence, we propose a new D2D centralized coded caching problem, named the 3-user D2D coded caching with two random requesters and one sender (2RR1S), where in the delivery phase, any two of the three users will make file requests, and the user that does not make any file request is the designated sender. We find the optimal caching and delivery scheme, denoted as the 2RR1S scheme, for any number of files $N$ by proving matching converse and achievability results. It is shown that coded cache placement is needed to achieve the optimal performance. Furthermore, the optimal rate-memory tradeoff has a uniform expression for $N\geq 4$ and different expressions for $N=2$ and $3$. To examine the usefulness of the proposed model and scheme, we adapt the 2RR1S scheme to two scenarios. The first one is the 3-user D2D coded caching model proposed by Ji et al. By characterizing the optimal rate-memory tradeoff for 3-user D2D coded caching when $N=2$, which was previously unknown, we show that the adapted 2RR1S scheme is in fact optimal for the 3-user D2D coded caching problem when $N=2$ and the cache size is medium. The benefit comes from coded cache placement, which is missing from existing D2D coded caching schemes. The second scenario is one where, in the delivery phase, each user makes a file request randomly and independently with the same probability $p$. We call this model the request-random D2D coded caching problem. Adapting the 2RR1S scheme to this scenario, we show the superiority of our adapted scheme over other existing D2D coded caching schemes for medium to large cache sizes.

This paper introduces a new accurate model for periodic fractional optimal control problems (PFOCPs) using Riemann-Liouville (RL) and Caputo fractional derivatives (FDs) with sliding fixed memory lengths. The paper also provides a novel numerical method for solving PFOCPs using Fourier and Gegenbauer pseudospectral methods. By employing Fourier collocation at equally spaced nodes and Fourier and Gegenbauer quadratures, the method transforms the PFOCP into a simple constrained nonlinear programming problem (NLP) that can be treated easily using standard NLP solvers. We propose a new transformation that largely simplifies the problem of calculating the periodic FDs of periodic functions to the problem of evaluating the integral of the first derivatives of their trigonometric Lagrange interpolating polynomials, which can be treated accurately and efficiently using Gegenbauer quadratures. We introduce the notion of the $\alpha$th-order fractional integration matrix with index $L$ based on Fourier and Gegenbauer pseudospectral approximations, which proves to be very effective in computing periodic FDs. We also provide a rigorous a priori error analysis to predict the quality of the Fourier-Gegenbauer-based approximations to FDs. The numerical results for the benchmark PFOCP demonstrate the performance of the proposed pseudospectral method.
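To illustrate the basic transformation (a FD with sliding memory rewritten as an integral of a first derivative over the memory window), the sketch below evaluates the Caputo FD of order $0<\alpha<1$ with memory length $L$ using a plain Gauss-Legendre rule; the paper's Fourier-Gegenbauer quadratures, tailored to the weakly singular weight and to the trigonometric interpolant, are not reproduced here, and the test function and all parameters are assumptions for the demonstration.

```python
import numpy as np
from math import gamma

def caputo_sliding(fprime, t, alpha, L, n_nodes=64):
    """Caputo fractional derivative (0 < alpha < 1) with sliding memory
    length L at time t, written as an integral of the first derivative:
        (1 / Gamma(1 - alpha)) * int_{t-L}^{t} (t - s)^(-alpha) fprime(s) ds.
    Evaluated here with plain Gauss-Legendre quadrature; a quadrature
    adapted to the weakly singular weight would converge much faster."""
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    s = t - L + 0.5 * L * (nodes + 1.0)   # map nodes from [-1, 1] to [t - L, t]
    w = 0.5 * L * weights
    integrand = (t - s) ** (-alpha) * fprime(s)
    return np.sum(w * integrand) / gamma(1.0 - alpha)

# periodic test function f(t) = sin(2*pi*t), so f'(t) = 2*pi*cos(2*pi*t)
fprime = lambda s: 2.0 * np.pi * np.cos(2.0 * np.pi * s)
for n in (16, 64, 256):
    print(n, caputo_sliding(fprime, t=1.3, alpha=0.5, L=1.0, n_nodes=n))
```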

Polarization-adjusted convolutional (PAC) codes can approach the theoretical bound on block error rate (BLER) performance at short-to-medium codeword lengths. PAC codes have excellent BLER performance under Monte Carlo (MC) rate-profiles and Weighted Sum (WS) rate-profiles, but the BLER performance of the constructed codes still falls away from the dispersion bound at high signal-to-noise ratios (SNR). This paper proposes a List-Search (LS) construction method for PAC codes, which accounts for the influence of the weight spectrum on BLER performance and the condition that sequence decoding of PAC codes must have a finite mean computational complexity. The proposed LS construction can reduce the number of minimum-weight codewords of PAC codes. The BLER performance of codes constructed using LS rate-profiles is better than that of codes constructed using MC or WS rate-profiles, and can approach the dispersion bound at high SNR. Moreover, the BLER performance of successive cancellation list (SCL) decoding of PAC codes with LS rate-profiles can approach the theoretical bound, but SCL decoding requires a large number of sorting operations. To reduce the number of sorting operations, a path-splitting critical sets (PSCS) construction method is proposed. The PSCS obtained by this method is the subset of information bits that has the greatest influence on the number of minimum-weight codewords. The simulation results show that this method can significantly reduce the number of sorting operations during SCL-type decoding.

We present a new residual-type energy-norm a posteriori error analysis for interior penalty discontinuous Galerkin (dG) methods for linear elliptic problems. The new error bounds are also applicable to dG methods on meshes consisting of elements with very general polygonal/polyhedral shapes; simplicial and/or box-type elements are included in the analysis as a special case. In particular, for the upper bounds, an arbitrary number of very small faces is allowed on each polygonal/polyhedral element, as long as certain mild shape-regularity assumptions are satisfied. As a corollary, the present analysis generalizes known a posteriori error bounds for dG methods, allowing in particular for meshes with an arbitrary number of irregular hanging nodes per element. The proof hinges on a new conforming recovery strategy in conjunction with a Helmholtz decomposition formula. The resulting a posteriori error bound involves jumps of the tangential derivatives along elemental faces. Local lower bounds are also proven for a number of practical cases. Numerical experiments are presented, highlighting the practical value of the derived a posteriori error bounds as error estimators.

On Bakhvalov-type meshes, the uniform convergence analysis of the finite element method for a 2-D singularly perturbed convection-diffusion problem with exponential layers is still an open problem, and previous attempts have been unsuccessful. The primary challenges are the width of the mesh subdomain in the layer adjacent to the transition point, the restriction imposed by the Dirichlet boundary condition, and the structure of the exponential layers. To address these challenges, a novel analysis technique is introduced, which takes full advantage of the characteristics of the interpolation and of the connection between the smooth function and the layer function on the boundary. Using this technique in conjunction with a new interpolation of simple structure, uniform convergence of optimal order $k+1$ in an energy norm can be proven for the finite element method of any order $k$. Numerical experiments confirm our theoretical results.

In Lipschitz domains, we study a Darcy-Forchheimer problem coupled with a singular heat equation through a nonlinear forcing term depending on the temperature. By singular we mean that the heat source corresponds to a Dirac measure. We establish the existence of solutions for a model that allows the diffusion coefficient in the heat equation to depend on the temperature. For such a model, we also propose a finite element discretization scheme and provide an a priori convergence analysis. In the case where the aforementioned diffusion coefficient is constant, we devise an a posteriori error estimator and investigate its reliability and efficiency properties. We conclude by devising an adaptive loop based on the proposed error estimator and presenting numerical experiments.
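Adaptive loops of this kind typically follow the standard SOLVE-ESTIMATE-MARK-REFINE cycle. The sketch below shows only the MARK step, as Dörfler (bulk) marking driven by element-wise estimator contributions; the solver, the estimator itself, and the refinement routine are assumed to be provided elsewhere, and the bulk parameter and toy indicator values are arbitrary choices for illustration.

```python
import numpy as np

def dorfler_mark(eta_local, theta=0.5):
    """Dörfler (bulk) marking: return the smallest set of element indices
    whose squared local indicators sum to at least theta * total."""
    eta_sq = np.asarray(eta_local) ** 2
    order = np.argsort(eta_sq)[::-1]             # largest contributions first
    cumulative = np.cumsum(eta_sq[order])
    n_marked = int(np.searchsorted(cumulative, theta * eta_sq.sum())) + 1
    return order[:n_marked]

# toy indicators: a few elements (e.g. near the Dirac source) dominate
eta = np.array([0.9, 0.05, 0.4, 0.02, 0.6, 0.03])
print(dorfler_mark(eta, theta=0.5))              # indices of elements to refine
```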

We use the lens of weak signal asymptotics to study a class of sequentially randomized experiments, including those that arise in solving multi-armed bandit problems. In an experiment with $n$ time steps, we let the mean reward gaps between actions scale to the order $1/\sqrt{n}$ so as to preserve the difficulty of the learning task as $n$ grows. In this regime, we show that the sample paths of a class of sequentially randomized experiments -- adapted to this scaling regime and with arm selection probabilities that vary continuously with state -- converge weakly to a diffusion limit, given as the solution to a stochastic differential equation. The diffusion limit enables us to derive refined, instance-specific characterizations of the stochastic dynamics, and to obtain several insights into the regret and belief evolution of a number of sequential experiments, including Thompson sampling (but not UCB, which does not satisfy our continuity assumption). We show that all sequential experiments whose randomization probabilities have a Lipschitz-continuous dependence on the observed data suffer from sub-optimal regret performance when the reward gaps are relatively large. Conversely, we find that a version of Thompson sampling with an asymptotically uninformative prior variance achieves near-optimal instance-specific regret scaling, including with large reward gaps, but these good regret properties come at the cost of highly unstable posterior beliefs.
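For intuition about the scaling regime, the sketch below simulates two-armed Gaussian Thompson sampling with the mean reward gap set to $\Delta/\sqrt{n}$ and tracks cumulative regret over the horizon. It is only a simulation of the experiment class under the weak-signal scaling, not an implementation of the diffusion-limit analysis; the gap $\Delta$, the prior variance, and the function name are assumptions for the demonstration.

```python
import numpy as np

def thompson_two_arm(n, delta, prior_var=1.0, seed=0):
    """Two-armed Gaussian Thompson sampling with mean rewards
    (0, delta / sqrt(n)) and unit noise variance, run for n steps."""
    rng = np.random.default_rng(seed)
    means = np.array([0.0, delta / np.sqrt(n)])   # weak-signal scaling of the gap
    post_mean = np.zeros(2)
    post_prec = np.full(2, 1.0 / prior_var)       # precisions of N(0, prior_var) priors
    regret = 0.0
    for _ in range(n):
        sample = rng.normal(post_mean, 1.0 / np.sqrt(post_prec))
        arm = int(np.argmax(sample))              # randomized arm selection
        reward = rng.normal(means[arm], 1.0)
        # conjugate Gaussian update with known unit noise variance
        post_prec[arm] += 1.0
        post_mean[arm] += (reward - post_mean[arm]) / post_prec[arm]
        regret += means.max() - means[arm]
    return regret

for n in (1_000, 10_000, 100_000):
    print(n, thompson_two_arm(n, delta=2.0))      # cumulative regret per horizon
```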

The paper analyses properties of a large class of "path-based" Data Envelopment Analysis models through a unifying general scheme. The scheme includes the well-known oriented radial models, the hyperbolic distance function model, the directional distance function models, and even permits generalisations of these models. The modelling is not constrained to non-negative data and is flexible enough to accommodate variants of standard models over arbitrary data. Mathematical tools developed in the paper allow systematic analysis of the models from the point of view of ten desirable properties. It is shown that some of the properties are satisfied (resp., fail) for all models in the general scheme, while others have a more nuanced behaviour and must be assessed individually in each model. Our results can help researchers and practitioners navigate among the different models and apply them to mixed data.
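As a concrete member of the path-based class discussed above, the sketch below solves the classical input-oriented radial (CCR) envelopment model as a linear program with SciPy. The data and function name are synthetic, and the formulation shown is the textbook oriented radial model rather than the paper's general scheme.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, o):
    """Input-oriented radial (CCR) efficiency of DMU o under constant
    returns to scale:  min theta  s.t.  X @ lam <= theta * x_o,
    Y @ lam >= y_o,  lam >= 0.  Decision variables: [theta, lam_1, ..., lam_n]."""
    m, n = X.shape                                    # m inputs, n DMUs
    s = Y.shape[0]                                    # s outputs
    c = np.r_[1.0, np.zeros(n)]                       # minimize theta
    A_in = np.hstack([-X[:, [o]], X])                 # X lam - theta x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])         # -Y lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# synthetic data: 2 inputs, 1 output, 5 DMUs
X = np.array([[2.0, 3.0, 6.0, 4.0, 5.0],
              [3.0, 2.0, 4.0, 6.0, 5.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0, 1.0]])
print([round(ccr_input_efficiency(X, Y, o), 3) for o in range(X.shape[1])])
```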
