
We analyse an energy minimisation problem recently proposed for modelling smectic-A liquid crystals. The optimality conditions give a coupled nonlinear system of partial differential equations, with a second-order equation for the tensor-valued nematic order parameter $\mathbf{Q}$ and a fourth-order equation for the scalar-valued smectic density variation $u$. Our two main results are a proof of the existence of solutions to the minimisation problem, and the derivation of a priori error estimates for its discretisation using the $\mathcal{C}^0$ interior penalty method. More specifically, optimal rates in the $H^1$ and $L^2$ norms are obtained for $\mathbf{Q}$, while optimal rates in a mesh-dependent norm and $L^2$ norm are obtained for $u$. Numerical experiments confirm the rates of convergence.
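The paper's exact discrete energy is not reproduced above, but for orientation the $\mathcal{C}^0$ interior penalty method can be illustrated on the model biharmonic problem $\Delta^2 u = f$, which shares the fourth-order character of the $u$-equation. A standard (Brenner--Sung type) discrete bilinear form, stated here as a sketch rather than as the authors' formulation, is

$$ a_h(u_h, v_h) = \sum_{K \in \mathcal{T}_h} \int_K D^2 u_h : D^2 v_h \, dx + \sum_{e \in \mathcal{E}_h} \int_e \left( \{\!\{ \partial^2_{nn} u_h \}\!\} [\![ \partial_n v_h ]\!] + \{\!\{ \partial^2_{nn} v_h \}\!\} [\![ \partial_n u_h ]\!] \right) ds + \sum_{e \in \mathcal{E}_h} \frac{\sigma}{h_e} \int_e [\![ \partial_n u_h ]\!] [\![ \partial_n v_h ]\!] \, ds, $$

where $\{\!\{\cdot\}\!\}$ and $[\![\cdot]\!]$ denote edge averages and jumps, $h_e$ is the edge length, and $\sigma > 0$ is the penalty parameter. Ordinary $H^1$-conforming Lagrange elements are used, with the missing $C^1$-continuity enforced weakly through the penalty terms, which is what makes the mesh-dependent norm in the error estimates natural.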

Related content

The performance of Simultaneous Wireless Information and Power Transfer (SWIPT) is mainly constrained by the received Radio-Frequency (RF) signal strength. To tackle this problem, we introduce an Intelligent Reflecting Surface (IRS) to compensate for the propagation loss and boost the transmission efficiency. This paper proposes a novel IRS-aided SWIPT system in which a multi-carrier multi-antenna Access Point (AP) transmits information and power simultaneously, with the assistance of an IRS, to a single-antenna User Equipment (UE) employing practical receiving schemes. Accounting for harvester nonlinearity, we characterize the achievable Rate-Energy (R-E) region through a joint optimization of waveform, active beamforming, and passive beamforming based on the Channel State Information at the Transmitter (CSIT). This problem is solved by the Block Coordinate Descent (BCD) method, where we obtain the active precoder in closed form, the passive beamforming by the Successive Convex Approximation (SCA) approach, and the waveform amplitude by the Geometric Programming (GP) technique. To facilitate practical implementation, we also propose a low-complexity design based on closed-form adaptive waveform schemes. Simulation results demonstrate that the proposed algorithms bring considerable R-E gains, are robust to CSIT inaccuracy and finite IRS states, and underscore the importance of modeling harvester nonlinearity in IRS-aided SWIPT design.
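As a minimal illustration of the block coordinate descent structure used above, and assuming nothing about the actual SWIPT system model, the following Python toy alternates two closed-form block updates on a rank-one approximation problem; each update is globally optimal given the other block, mirroring how the active precoder step is solved in closed form:

```python
import numpy as np

# Toy biconvex problem: min_{u, v} ||M - u v^T||_F^2, solved by BCD.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 15))
u = rng.standard_normal(20)
v = rng.standard_normal(15)

for it in range(50):
    u = M @ v / (v @ v)      # block 1: closed-form optimum for fixed v
    v = M.T @ u / (u @ u)    # block 2: closed-form optimum for fixed u

obj = np.linalg.norm(M - np.outer(u, v)) ** 2
print(f"final objective: {obj:.4f}")  # converges to the best rank-one fit
```

The real algorithm replaces these blocks with the closed-form precoder, the SCA passive-beamforming step, and the GP waveform step, cycling until the R-E objective converges.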

Device-to-device (D2D) communications is expected to be a critical enabler of distributed computing in edge networks at scale. A key challenge in providing this capability is the need for judicious management of the heterogeneous communication and computation resources at the edge to meet processing needs. In this paper, we develop an optimization methodology that considers the network topology jointly with device and network resource allocation to minimize total D2D overhead, which we quantify in terms of the time and energy required for task processing. Variables in our model include task assignment, CPU allocation, subchannel selection, and beamforming design for multiple-input multiple-output (MIMO) wireless devices. We propose two methods to solve the resulting non-convex mixed integer program: semi-exhaustive search optimization, which represents a best-effort attempt at obtaining the optimal solution, and efficient alternate optimization, which is more computationally efficient. As a component of these two methods, we develop a novel coordinated beamforming algorithm which we show obtains the optimal beamformer for a common receiver characteristic. Through numerical experiments, we find that our methodology yields substantial reductions in network overhead compared with local computation and partially optimized methods, which validates our joint optimization approach. Further, we find that the efficient alternate optimization scales well with the number of nodes and can thus be a practical solution for D2D computing in large networks.
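The paper's coordinated beamforming algorithm depends on its receiver model and is not reproduced here; as background, the following hypothetical Python sketch shows the classical single-link case, where the transmit beamformer maximizing received power over a MIMO channel `H` is the dominant right singular vector:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy 4x8 Rayleigh MIMO channel (4 receive, 8 transmit antennas).
H = (rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))) / np.sqrt(2)

# Maximize ||H w||^2 subject to ||w|| = 1: w is the leading right singular vector.
U, s, Vh = np.linalg.svd(H)
w = Vh[0].conj()
gain = np.linalg.norm(H @ w) ** 2
print(f"beamforming gain: {gain:.3f}  (sigma_max^2 = {s[0]**2:.3f})")
```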

Given an undirected graph $G=(V,E)$, the longest induced path problem (LIPP) consists of finding a maximum-cardinality subset $W\subseteq V$ such that $W$ induces a simple path in $G$. In this paper, we propose two new formulations with an exponential number of constraints for the problem, together with effective branch-and-cut procedures for its solution. While the first formulation (cec) is based on constraints that explicitly eliminate cycles, the second one (cut) ensures connectivity via cutset constraints. We compare, both theoretically and experimentally, the newly proposed approaches with a state-of-the-art formulation recently proposed in the literature. More specifically, we show that the polyhedron defined by formulation cut coincides with that of the formulation available in the literature, and that both are stronger in theory than cec. We also propose a new branch-and-cut procedure using the new formulations. Computational experiments show that the newly proposed formulation cec, although theoretically weaker, is the best-performing approach, as it solves all but one of the 1065 benchmark instances used in the literature within the given time limit. In addition, our newly proposed approaches outperform the state-of-the-art formulation in the median time to solve the instances to optimality. Furthermore, we perform extended computational experiments on more challenging, larger instances and evaluate the impact of offering initial feasible solutions (warm starts) to the formulations.
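To make the problem definition concrete (this is plain brute force, not the branch-and-cut machinery above), the following Python sketch enumerates vertex subsets and checks whether each induces a simple path; it is exponential and only viable on tiny graphs:

```python
from itertools import combinations

def induces_path(adj, W):
    """True iff the subgraph of `adj` induced by vertex set W is a simple path."""
    W = set(W)
    deg = {v: len(adj[v] & W) for v in W}
    if any(d > 2 for d in deg.values()):
        return False
    if len(W) > 1 and sum(1 for d in deg.values() if d == 1) != 2:
        return False
    # Connectivity check restricted to W (rules out a path plus a disjoint cycle).
    start = next(iter(W))
    seen, stack = {start}, [start]
    while stack:
        for u in adj[stack.pop()] & W:
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return seen == W

def longest_induced_path(adj):
    """Brute force over all subsets, largest first; exponential in |V|."""
    V = list(adj)
    for k in range(len(V), 0, -1):
        for W in combinations(V, k):
            if induces_path(adj, W):
                return W
    return ()

C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}  # 5-cycle
print(longest_induced_path(C5))  # four vertices, e.g. (0, 1, 2, 3)
```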

We analyze the orthogonal greedy algorithm when applied to dictionaries $\mathbb{D}$ whose convex hull has small entropy. We show that if the metric entropy of the convex hull of $\mathbb{D}$ decays at a rate of $O(n^{-\frac{1}{2}-\alpha})$ for $\alpha > 0$, then the orthogonal greedy algorithm converges at the same rate on the variation space of $\mathbb{D}$. This improves upon the well-known $O(n^{-\frac{1}{2}})$ convergence rate of the orthogonal greedy algorithm in many cases, most notably for dictionaries corresponding to shallow neural networks. These results hold under no additional assumptions on the dictionary beyond the decay rate of the entropy of its convex hull. In addition, they are robust to noise in the target function and can be extended to convergence rates on the interpolation spaces of the variation norm. Finally, we show that these improved rates are sharp and prove a negative result showing that the iterates generated by the orthogonal greedy algorithm cannot in general be bounded in the variation norm of $\mathbb{D}$.
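For reference, the algorithm itself (closely related to orthogonal matching pursuit) is simple to state: greedily select the dictionary element most correlated with the current residual, then re-project the target onto the span of all elements selected so far. A minimal Python sketch on a toy random dictionary, which of course exhibits none of the entropy-based rate analysis:

```python
import numpy as np

def orthogonal_greedy(f, D, steps):
    """Greedy selection by residual correlation, with full re-projection."""
    idx, approx = [], np.zeros_like(f)
    for _ in range(steps):
        r = f - approx                          # current residual
        j = int(np.argmax(np.abs(D.T @ r)))     # most correlated element
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], f, rcond=None)
        approx = D[:, idx] @ coef               # orthogonal projection onto span
    return approx, idx

rng = np.random.default_rng(0)
D = rng.standard_normal((100, 500))
D /= np.linalg.norm(D, axis=0)                  # unit-norm dictionary columns
f = D[:, [3, 7, 42]] @ np.array([2.0, -1.0, 0.5])
approx, idx = orthogonal_greedy(f, D, steps=5)
print(sorted(idx), np.linalg.norm(f - approx))
```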

The conditions for a Runge--Kutta method to be of order $p$ with $p\ge 5$ for a scalar non-autonomous problem are a proper subset of the order conditions for a vector problem. Nevertheless, Runge--Kutta methods that were derived historically only for scalar problems happened to be of the same order for vector problems. We relate the order conditions for scalar problems to factorisations of the Runge--Kutta trees into "atomic stumps" and enumerate those conditions up to $p=20$. Using a special search procedure over unsatisfied order conditions, new Runge--Kutta methods of "ambiguous orders" five and six are constructed. These are used to verify the validity of the results.
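For orders $p \le 4$ the scalar and vector conditions coincide, so the divergence described above only begins at $p = 5$. As a reminder of what the conditions look like, the following Python snippet checks the eight rooted-tree conditions through order four for the classical RK4 tableau (one condition per tree):

```python
import numpy as np

# Classical RK4 Butcher tableau.
A = np.array([[0, 0, 0, 0],
              [1/2, 0, 0, 0],
              [0, 1/2, 0, 0],
              [0, 0, 1, 0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
c = A.sum(axis=1)

conditions = {                                    # one per rooted tree, orders 1-4
    "sum(b)     = 1":    b.sum() - 1,
    "b.c        = 1/2":  b @ c - 1/2,
    "b.c^2      = 1/3":  b @ c**2 - 1/3,
    "b.(Ac)     = 1/6":  b @ (A @ c) - 1/6,
    "b.c^3      = 1/4":  b @ c**3 - 1/4,
    "b.(c*Ac)   = 1/8":  b @ (c * (A @ c)) - 1/8,
    "b.(A c^2)  = 1/12": b @ (A @ c**2) - 1/12,
    "b.(A A c)  = 1/24": b @ (A @ A @ c) - 1/24,
}
for name, residual in conditions.items():
    print(f"{name}: residual {residual:+.1e}")    # all zero for RK4
```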

Estimating causal effects from randomized experiments is central to clinical research. Reducing the statistical uncertainty in these analyses is an important objective for statisticians. Registries, prior trials, and health records constitute a growing compendium of historical data on patients under standard-of-care that may be exploitable to this end. However, most methods for historical borrowing achieve reductions in variance by sacrificing strict type-I error rate control. Here, we propose a use of historical data that exploits linear covariate adjustment to improve the efficiency of trial analyses without incurring bias. Specifically, we train a prognostic model on the historical data, then estimate the treatment effect using a linear regression while adjusting for the trial subjects' predicted outcomes (their prognostic scores). We prove that, under certain conditions, this prognostic covariate adjustment procedure attains the minimum variance possible among a large class of estimators. When those conditions are not met, prognostic covariate adjustment is still more efficient than raw covariate adjustment and the gain in efficiency is proportional to a measure of the predictive accuracy of the prognostic model above and beyond the linear relationship with the raw covariates. We demonstrate the approach using simulations and a reanalysis of an Alzheimer's Disease clinical trial and observe meaningful reductions in mean-squared error and the estimated variance. Lastly, we provide a simplified formula for asymptotic variance that enables power calculations that account for these gains. Sample size reductions between 10% and 30% are attainable when using prognostic models that explain a clinically realistic percentage of the outcome variance.
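A minimal sketch of the procedure on simulated data, with a polynomial fit standing in for the prognostic model (the estimator and its variance results in the paper do not depend on this particular model choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Historical control data: outcome depends nonlinearly on a covariate.
x_hist = rng.standard_normal(2000)
y_hist = np.sin(2 * x_hist) + 0.3 * rng.standard_normal(2000)
prog = np.polyfit(x_hist, y_hist, deg=5)      # the "prognostic model"

# Randomized trial with a constant treatment effect of 0.5.
n = 200
x = rng.standard_normal(n)
t = rng.integers(0, 2, n).astype(float)
y = np.sin(2 * x) + 0.5 * t + 0.3 * rng.standard_normal(n)
m = np.polyval(prog, x)                       # prognostic scores for trial subjects

def treatment_effect(design, y):
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[1]                            # coefficient on treatment

ones = np.ones(n)
print("unadjusted:      ", treatment_effect(np.column_stack([ones, t]), y))
print("raw covariate:   ", treatment_effect(np.column_stack([ones, t, x]), y))
print("prognostic score:", treatment_effect(np.column_stack([ones, t, x, m]), y))
```

Because treatment is randomized, all three estimates are unbiased; the gain from the prognostic score is a smaller variance, which is what drives the reported sample size reductions.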

This PhD thesis contains several contributions to the field of statistical causal modeling. Statistical causal models are statistical models embedded with causal assumptions that allow inference and reasoning about the behavior of stochastic systems affected by external manipulation (interventions). This thesis contributes to the research areas of causal effect estimation, causal structure learning, and distributionally robust (out-of-distribution generalizing) prediction methods. We present novel and consistent linear and non-linear causal effect estimators in instrumental variable settings that employ data-dependent mean squared prediction error regularization. In certain settings, our proposed estimators improve on the mean squared error of both canonical and state-of-the-art estimators. We show that recent research on distributionally robust prediction methods has connections to well-studied estimators from econometrics, and this connection leads us to prove that general K-class estimators possess distributional robustness properties. Furthermore, we propose a general framework for distributional robustness with respect to intervention-induced distributions. In this framework, we derive sufficient conditions for the identifiability of distributionally robust prediction methods and present impossibility results that show the necessity of several of these conditions. We present a new structure learning method applicable in additive noise models with directed trees as causal graphs, prove consistency in a vanishing identifiability setup, and provide a method for testing substructure hypotheses with asymptotic family-wise error control that remains valid post-selection. Finally, we present heuristic ideas for learning summary graphs of nonlinear time-series models.
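For reference, the general K-class estimator mentioned above interpolates between ordinary least squares and two-stage least squares. In standard notation (which may differ from the thesis's), with outcome $y$, endogenous regressors $X$, instruments $Z$, and annihilator $M_Z = I - Z(Z^\top Z)^{-1} Z^\top$,

$$ \hat{\beta}(\kappa) = \left( X^\top (I - \kappa M_Z) X \right)^{-1} X^\top (I - \kappa M_Z) y, $$

so that $\kappa = 0$ recovers OLS and $\kappa = 1$ recovers TSLS; the distributional robustness results in the thesis concern the behavior of this family of estimators.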

The Variational Auto-Encoder (VAE) is one of the most widely used unsupervised machine learning models. Although the default choice of a Gaussian distribution for both the prior and the posterior is mathematically convenient and often leads to competitive results, we show that this parameterization fails to model data with a latent hyperspherical structure. To address this issue, we propose using a von Mises-Fisher (vMF) distribution instead, leading to a hyperspherical latent space. Through a series of experiments, we show that such a hyperspherical VAE, or $\mathcal{S}$-VAE, is better suited to capturing data with a hyperspherical latent structure, while outperforming a normal, $\mathcal{N}$-VAE, in low dimensions on other data types.
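For concreteness, the vMF density on the unit $(p-1)$-sphere is $f(x; \mu, \kappa) \propto \exp(\kappa \mu^\top x)$, with mean direction $\mu$ and concentration $\kappa$. A small Python sketch of its log-density, using the exponentially scaled Bessel function for numerical stability (the reparameterized sampler the VAE also requires is omitted here):

```python
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel function

def vmf_log_pdf(x, mu, kappa):
    """Log-density of the von Mises-Fisher distribution on the unit sphere."""
    p = mu.shape[0]
    v = p / 2 - 1
    # log C_p(kappa), using I_v(kappa) = ive(v, kappa) * exp(kappa).
    log_norm = v * np.log(kappa) - (p / 2) * np.log(2 * np.pi) \
               - (np.log(ive(v, kappa)) + kappa)
    return log_norm + kappa * (x @ mu)

mu = np.array([0.0, 0.0, 1.0])    # mean direction on S^2
x = np.array([0.0, 0.6, 0.8])     # a unit vector
print(vmf_log_pdf(x, mu, kappa=10.0))
```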

We propose a new estimation method for topic models that is not a variation on existing simplex-finding algorithms and that estimates the number of topics $K$ from the observed data. We derive new finite-sample minimax lower bounds for the estimation of the word-topic matrix $A$, as well as new upper bounds for our proposed estimator, and we describe the scenarios in which our estimator is minimax adaptive. Our finite-sample analysis is valid for any number of documents $n$, individual document length $N_i$, dictionary size $p$, and number of topics $K$; both $p$ and $K$ are allowed to increase with $n$, a situation not handled well by previous analyses. We complement our theoretical results with a detailed simulation study, illustrating that the new algorithm is faster and more accurate than existing ones, even though it starts with the computational and theoretical disadvantage of not knowing the correct number of topics $K$, while the competing methods are provided with the correct value in our simulations.
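In the standard topic-model setup this notation suggests (the paper's precise assumptions may differ), document $i$'s observed word frequencies $X_i \in \mathbb{R}^p$ satisfy

$$ N_i X_i \sim \mathrm{Multinomial}(N_i, \Pi_i), \qquad \Pi_i = A W_i, $$

where the columns of $A \in \mathbb{R}^{p \times K}$ are the per-topic word distributions and $W_i$ collects the topic weights of document $i$; the object estimated above is $A$, with $K$ itself learned from the data.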

In this paper we study the frequentist convergence rate for Latent Dirichlet Allocation (Blei et al., 2003) topic models. We show that the maximum likelihood estimator converges to one of the finitely many equivalent parameters in the Wasserstein distance at a rate of $n^{-1/4}$, without assuming separability or non-degeneracy of the underlying topics or the existence of more than three words per document, thus generalizing the previous works of Anandkumar et al. (2012, 2014) from an information-theoretic perspective. We also show that the $n^{-1/4}$ convergence rate is optimal in the worst case.
