It is well known that the traditional Jensen inequality is proved by lower bounding the given convex function, $f(x)$, by the tangential affine function that passes through the point $(E\{X\},f(E\{X\}))$, where $E\{X\}$ is the expectation of the random variable $X$. While this tangential affine function yields the tightest lower bound among all lower bounds induced by affine functions that are tangential to $f$, it turns out that when the function $f$ is just part of a more complicated expression whose expectation is to be bounded, the tightest lower bound might belong to a tangential affine function that passes through a point different from $(E\{X\},f(E\{X\}))$. In this paper, we take advantage of this observation by optimizing the point of tangency with regard to the given expression, in a variety of cases, and thereby derive several families of inequalities, henceforth referred to as ``Jensen-like'' inequalities, which, to the best of the author's knowledge, are new. The degree of tightness and the potential usefulness of these inequalities are demonstrated in several application examples related to information theory.
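A worked version of the mechanism described above may help; the nonnegative weight function $g$ below is an illustrative addition, not taken from the abstract. For a differentiable convex $f$ and any tangency point $x_0$,

```latex
% Tangent-line bound for a differentiable convex f at an arbitrary x_0:
%   f(X) >= f(x_0) + f'(x_0)(X - x_0).
% Taking expectations and choosing x_0 = E{X} recovers the classical
% Jensen inequality E{f(X)} >= f(E{X}). If instead f sits inside a larger
% expression, e.g. E{g(X) f(X)} with g >= 0 (an illustrative choice),
% multiplying the bound by g(X) before taking expectations gives
\[
  E\{g(X)f(X)\} \;\ge\; \max_{x_0}\Bigl[\, f(x_0)\,E\{g(X)\}
      + f'(x_0)\bigl(E\{X g(X)\} - x_0\,E\{g(X)\}\bigr)\Bigr],
\]
% and the maximizing x_0 is in general no longer E{X}.
```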

Related content

The Non-dominated Sorting Genetic Algorithm II (NSGA-II) is the most prominent multi-objective evolutionary algorithm for real-world applications. While it evidently performs well on bi-objective optimization problems, empirical studies suggest that it is less effective when applied to problems with more than two objectives. A recent mathematical runtime analysis confirmed this observation by proving that the NSGA-II, even when run for an exponential number of iterations, misses a constant fraction of the Pareto front of the simple 3-objective OneMinMax problem. In this work, we provide the first mathematical runtime analysis of the NSGA-III, a refinement of the NSGA-II aimed at better handling more than two objectives. We prove that the NSGA-III with sufficiently many reference points -- a small constant factor more than the size of the Pareto front, as suggested for this algorithm -- computes the complete Pareto front of the 3-objective OneMinMax benchmark in an expected number of O(n log n) iterations. This result holds for all population sizes (that are at least the size of the Pareto front). It shows a drastic advantage of the NSGA-III over the NSGA-II on this benchmark. The mathematical arguments used here and in previous work on the NSGA-II suggest that similar findings are likely for other benchmarks with three or more objectives.
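For readers unfamiliar with the mechanism that distinguishes the NSGA-III, the following is a minimal sketch of its reference-point association step, using Das and Dennis's simplex-lattice construction of reference points; it illustrates the general idea and is not code from the paper.

```python
import numpy as np
from itertools import product

def das_dennis_points(m, H):
    """Uniform reference points on the (m-1)-simplex (Das-Dennis construction)."""
    pts = [np.array(c) / H
           for c in product(range(H + 1), repeat=m) if sum(c) == H]
    return np.array(pts)

def associate(F, refs):
    """Associate each objective vector (row of F) with the nearest reference
    ray, measured by perpendicular distance to the line through the origin."""
    dirs = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    proj = F @ dirs.T                              # projections onto each ray
    resid = (F ** 2).sum(axis=1, keepdims=True) - proj ** 2
    return np.argmin(np.sqrt(np.maximum(resid, 0.0)), axis=1)

refs = das_dennis_points(3, H=12)                  # 91 points for 3 objectives
F = np.random.default_rng(0).random((20, 3))       # toy objective vectors
print(np.bincount(associate(F, refs), minlength=len(refs)).max())
```

Selection then prefers individuals associated with under-represented reference rays, which is what spreads the population across the whole Pareto front.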

In many complex systems, whether biological or artificial, the thermodynamic costs of communication among their components are large. These systems also tend to split information transmitted between any two components across multiple channels. A common hypothesis is that such inverse multiplexing strategies reduce total thermodynamic costs. So far, however, there have been no physics-based results supporting this hypothesis. This gap existed partially because we have lacked a theoretical framework that addresses the interplay of thermodynamics and information in off-equilibrium systems at any spatiotemporal scale. Here we present the first study that rigorously combines such a framework, stochastic thermodynamics, with Shannon information theory. We develop a minimal model that captures the fundamental features common to a wide variety of communication systems. We find that the thermodynamic cost in this model is a convex function of the channel capacity, the canonical measure of the communication capability of a channel. We also find that this function is not always monotonic, in contrast to previous results not derived from first principles physics. These results clarify when and how to split a single communication stream across multiple channels. In particular, we present Pareto fronts that reveal the trade-off between thermodynamic costs and channel capacity when inverse multiplexing. Due to the generality of our model, our findings could help explain empirical observations of how thermodynamic costs of information transmission make inverse multiplexing energetically favorable in many real-world communication systems.
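To make the trade-off concrete, here is a toy numerical sketch; the quadratic cost-versus-capacity curve is an assumed stand-in for illustration only, not the model derived in the paper.

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with flip probability p."""
    return 1.0 - h2(p)

def cost(C):
    """Illustrative convex cost-vs-capacity curve (NOT the paper's model)."""
    return C ** 2

C_total = 0.8                        # total capacity the stream requires
for k in (1, 2, 4):                  # split the stream across k channels
    print(k, "channels:", k * cost(C_total / k))
```

With any convex cost vanishing at zero capacity, k * cost(C/k) is non-increasing in k, which is the sense in which inverse multiplexing can pay off.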

Nowadays, more and more problems involve data with one infinite, continuous dimension: functional data. In this paper, we introduce the funLOCI algorithm, which identifies functional local clusters, or functional loci, i.e., subsets/groups of functions exhibiting similar behaviour across the same continuous subset of the domain. The definition of functional local clusters leverages ideas from multivariate and functional clustering and biclustering, and is based on an additive model which takes into account the shape of the curves. funLOCI is a three-step algorithm based on divisive hierarchical clustering. Dendrograms are used to visualize and guide the search procedure and the selection of cutting thresholds. To deal with the large number of local clusters, an extra step is implemented to reduce the number of results to a minimum.
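As a rough illustration of the idea (not the funLOCI implementation, which rests on an additive shape model), one can hierarchically cluster curves restricted to a candidate sub-interval of the domain and cut the dendrogram:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy functional data: 30 curves sampled on a common grid of 100 points.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
curves = np.sin(2 * np.pi * t) + rng.normal(0, 0.1, (30, 100))
curves[15:] += 2.0 * (t > 0.5)        # half the curves deviate on [0.5, 1]

# Restrict to a candidate sub-interval and cluster hierarchically.
window = (t >= 0.5)
Z = linkage(curves[:, window], method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")   # cut the dendrogram
print(labels)
```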

A pure quantum state of $n$ parties associated with the Hilbert space $\mathbb{C}^{d_1}\otimes \mathbb{C}^{d_2}\otimes\cdots\otimes \mathbb{C}^{d_n}$ is called $k$-uniform if all the reductions to $k$ parties are maximally mixed. The $n$-partite system is called homogeneous if the local dimensions are equal, $d_1=d_2=\cdots=d_n$, and heterogeneous if they are not all equal. $k$-uniform states play an important role in quantum information theory. There has been much progress in characterizing and constructing $k$-uniform states in homogeneous systems. However, the study of entanglement for heterogeneous systems is much more challenging than for the homogeneous case, and very few results are known for $k$-uniform states in heterogeneous systems with $k>3$. We present two general methods to construct $k$-uniform states in heterogeneous systems for general $k$. The first construction is derived from error-correcting codes, by establishing a connection between irredundant mixed orthogonal arrays and error-correcting codes; it produces many new $k$-uniform states in which the local dimension of each subsystem can be a prime power. The second construction is derived from a matrix $H$ such that $H_{A\times \bar{A}}+H^T_{\bar{A}\times A}$ has full rank for any row index set $A$ of size $k$. This matrix construction provides more flexible choices for the local dimensions: they can be any integers (not necessarily prime powers) subject to some constraints. Our constructions imply that for any positive integer $k$, one can construct $k$-uniform states of a heterogeneous system in many different Hilbert spaces.
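The defining property is directly checkable numerically. Below is a small, self-contained sketch (not from the paper) that verifies $k$-uniformity by testing every $k$-party reduced density matrix against the maximally mixed state; it supports heterogeneous local dimensions.

```python
import numpy as np
from itertools import combinations

def reduced_dm(psi, dims, keep):
    """Reduced density matrix of |psi> on the parties in `keep`."""
    psi = psi.reshape(dims)
    traced = [i for i in range(len(dims)) if i not in keep]
    d_keep = int(np.prod([dims[i] for i in keep]))
    # move kept axes to the front, then trace out the rest
    psi = np.transpose(psi, list(keep) + traced).reshape(d_keep, -1)
    return psi @ psi.conj().T

def is_k_uniform(psi, dims, k, tol=1e-10):
    """Check that every k-party reduction is maximally mixed."""
    for keep in combinations(range(len(dims)), k):
        d = int(np.prod([dims[i] for i in keep]))
        if not np.allclose(reduced_dm(psi, dims, keep), np.eye(d) / d, atol=tol):
            return False
    return True

# The GHZ state on three qubits is 1-uniform but not 2-uniform.
ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
print(is_k_uniform(ghz, (2, 2, 2), 1), is_k_uniform(ghz, (2, 2, 2), 2))
```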

Adversarial team games model multiplayer strategic interactions in which a team of identically-interested players is competing against an adversarial player in a zero-sum game. Such games capture many well-studied settings in game theory, such as congestion games, but go well beyond them, to environments wherein the cooperation of one team -- in the absence of explicit communication -- is obstructed by competing entities; the latter setting remains poorly understood despite its numerous applications. Since the seminal work of Von Stengel and Koller (GEB `97), different solution concepts have received attention from an algorithmic standpoint. Yet, the complexity of the standard Nash equilibrium has remained open. In this paper, we settle this question by showing that computing a Nash equilibrium in adversarial team games belongs to the class continuous local search (CLS), thereby establishing CLS-completeness by virtue of the recent CLS-hardness result of Rubinstein and Babichenko (STOC `21) in potential games. To do so, we leverage linear programming duality to prove that any $\epsilon$-approximate stationary strategy for the team can be extended in polynomial time to an $O(\epsilon)$-approximate Nash equilibrium, where the $O(\cdot)$ notation suppresses polynomial factors in the description of the game. As a consequence, we show that the Moreau envelope of a suitable best-response function acts as a potential under certain natural gradient-based dynamics.
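For reference, the standard definition of the object used in the last step (the paper's notation may differ):

```latex
% Moreau envelope of a function f with smoothing parameter \lambda > 0:
\[
  f_{\lambda}(x) \;=\; \min_{y}\Bigl\{\, f(y) + \tfrac{1}{2\lambda}\,\lVert x - y\rVert^{2} \Bigr\}.
\]
% f_\lambda is a smoothed lower approximation of f, and the norm of its
% gradient at x is a standard measure of how close x is to a stationary
% point of f -- which is what links approximate stationarity of the
% best-response potential to approximate equilibria.
```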

We consider a facility location game in which $n$ agents reside at known locations on a path, and $k$ heterogeneous facilities are to be constructed on the path. Each agent is adversely affected by some subset of the facilities, and is unaffected by the others. We design two classes of mechanisms for choosing the facility locations given the reported agent preferences: utilitarian mechanisms that strive to maximize social welfare (i.e., to be efficient), and egalitarian mechanisms that strive to maximize the minimum welfare. For the utilitarian objective, we present a weakly group-strategyproof efficient mechanism for up to three facilities, we give strongly group-strategyproof mechanisms that achieve approximation ratios of $5/3$ and $2$ for $k=1$ and $k > 1$, respectively, and we prove that no strongly group-strategyproof mechanism achieves an approximation ratio less than $5/3$ for the case of a single facility. For the egalitarian objective, we present a strategyproof egalitarian mechanism for arbitrary $k$, and we prove that no weakly group-strategyproof mechanism achieves a $o(\sqrt{n})$ approximation ratio for two facilities. We extend our egalitarian results to the case where the agents are located on a cycle, and we extend our first egalitarian result to the case where the agents are located in the unit square.
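A small evaluator for the two objectives, under one plausible welfare model (an agent's welfare is its distance to the nearest facility that affects it); this model and all names below are assumptions for illustration, since the abstract does not pin down the welfare function.

```python
def welfare(agent_loc, disliked, placements):
    """Distance from an agent to the nearest facility it is affected by."""
    return min(abs(agent_loc - placements[f]) for f in disliked)

def objectives(agents, placements):
    """agents: list of (location, set of disliked facility indices)."""
    w = [welfare(x, d, placements) for x, d in agents if d]
    return sum(w), min(w)          # utilitarian, egalitarian

agents = [(0.1, {0}), (0.4, {0, 1}), (0.9, {1})]
print(objectives(agents, placements=[0.7, 0.2]))   # (1.5, 0.2)
```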

The electric vehicle sharing problem (EVSP) arises from the planning and operation of one-way electric car-sharing systems. It aims to maximize the total rental time of a fleet of electric vehicles while ensuring that all customer demands are fulfilled. In this paper, we expand the knowledge on the complexity of the EVSP by showing that it is NP-hard to approximate within a factor of $n^{1-\epsilon}$ in polynomial time, for any $\epsilon > 0$, where $n$ denotes the number of customers, unless P = NP. We also show that the problem does not have a monotone structure, which can be detrimental to the development of heuristics employing constructive strategies. Moreover, we propose a novel approach to modeling the EVSP based on energy flows in the network. Based on the new model, we propose a relax-and-fix strategy and an exact algorithm that uses a warm-start solution obtained from our heuristic approach. We report computational results comparing our formulation with the best-performing formulation in the literature. The results show that our formulation outperforms the previous one in terms of the number of optimal solutions obtained, optimality gaps, and computational times. Previously, $32.7\%$ of the instances remained unsolved (within a time limit of one hour) by the best-performing formulation in the literature, whereas our formulation obtained optimal solutions for all instances. To stress our approaches, two more challenging new sets of instances were generated, for which we were able to solve $49.5\%$ of the instances, with an average optimality gap of $2.91\%$ for those not solved optimally.
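As background on the solution strategy named above, here is a generic relax-and-fix loop on a toy 0/1 knapsack model (using PuLP); the paper's energy-flow MIP is of course different, so this only illustrates the block-by-block fix-and-resolve pattern.

```python
import pulp

v = [6, 5, 4, 3, 2]; w = [5, 4, 3, 2, 1]; cap = 8
blocks = [[0, 1], [2, 3], [4]]               # partition of the variables

x = [pulp.LpVariable(f"x{i}", 0, 1, cat="Continuous") for i in range(5)]
for blk in blocks:
    for i in blk:
        x[i].cat = "Integer"                 # current block: enforce integrality
    prob = pulp.LpProblem("relax_and_fix", pulp.LpMaximize)
    prob += pulp.lpSum(v[i] * x[i] for i in range(5))
    prob += pulp.lpSum(w[i] * x[i] for i in range(5)) <= cap
    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    for i in blk:                            # fix the block at its solved value
        x[i].lowBound = x[i].upBound = round(x[i].value())
print([round(xi.value()) for xi in x])
```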

The goal of multi-objective optimization is to identify a collection of points which describe the best possible trade-offs between the multiple objectives. To solve this vector-valued optimization problem, practitioners often appeal to scalarization functions, which transform the multi-objective problem into a collection of single-objective problems. This set of scalarized problems can then be solved using traditional single-objective optimization techniques. In this work, we formalise this convention into a general mathematical framework. We show how this strategy effectively recasts the original multi-objective optimization problem into a single-objective optimization problem defined over sets. An appropriate class of objective functions for this new problem is the R2 utility function, which is defined as a weighted integral over the scalarized optimization problems. We show that this utility function is a monotone and submodular set function, which can be optimised effectively using greedy optimization algorithms. We analyse the performance of these greedy algorithms both theoretically and empirically. Our analysis largely focusses on Bayesian optimization, which is a popular probabilistic framework for black-box optimization.
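A compact numerical sketch of the R2-utility viewpoint; the weighted-sum scalarization, the uniform weight distribution, and all names below are illustrative choices, not the paper's definitions.

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.random((50, 2))                  # candidate objective vectors (maximize)
W = rng.dirichlet([1, 1], size=1000)     # Monte-Carlo scalarization weights

def r2_utility(subset):
    """Average, over weights, of the best scalarized value in the subset."""
    return (W @ Y[subset].T).max(axis=1).mean() if subset else 0.0

chosen = []
for _ in range(5):                       # greedy: add the best marginal point
    gains = [r2_utility(chosen + [i]) for i in range(len(Y))]
    chosen.append(int(np.argmax(gains)))
print(chosen, r2_utility(chosen))
```

Monotonicity and submodularity of the set utility are what give this greedy loop the standard guarantees for monotone submodular maximization.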

Game-theoretic interactions with AI agents could differ from traditional human-human interactions in various ways. One such difference is that it may be possible to simulate an AI agent (for example because its source code is known), which allows others to accurately predict the agent's actions. This could lower the bar for trust and cooperation. In this paper, we formalize games in which one player can simulate another at a cost. We first derive some basic properties of such games and then prove a number of results for them, including: (1) introducing simulation into generic-payoff normal-form games makes them easier to solve; (2) if the only obstacle to cooperation is a lack of trust in the possibly-simulated agent, simulation enables equilibria that improve the outcome for both agents; however, (3) there are settings where introducing simulation results in strictly worse outcomes for both players.
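To make point (2) concrete, here is a toy trust game; the payoff numbers and the simulation cost are invented for illustration and do not come from the paper.

```python
# Rows: P1 in {not, trust}; columns: P2 in {coop, defect}.
payoffs = {("not", "coop"): (1, 1), ("not", "defect"): (1, 1),
           ("trust", "coop"): (2, 2), ("trust", "defect"): (0, 3)}
c = 0.25                                   # P1's cost of simulating P2

# Without simulation: P2 defects whenever trusted, so P1 plays "not": (1, 1).
# With simulation: P1 pays c, observes P2's strategy, trusts iff P2 cooperates.
def outcome_with_sim(p2_strategy):
    act = "trust" if p2_strategy == "coop" else "not"
    u1, u2 = payoffs[(act, p2_strategy)]
    return u1 - c, u2

print(outcome_with_sim("coop"), outcome_with_sim("defect"))  # (1.75, 2) vs (0.75, 1)
# P2 now prefers cooperating (2 > 1), so both players beat the no-trust payoff (1, 1).
```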

This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.
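Criticality is the most directly reproducible claim above. A minimal NumPy experiment in that spirit (an illustration, not the book's code): for a deep tanh network with zero bias variance, the preactivation variance vanishes exponentially with depth when the weight variance C_W < 1, settles at a nontrivial fixed point when C_W > 1, and exhibits the marginal, critical behaviour at C_W = 1.

```python
import numpy as np

def variance_through_depth(C_W, depth=50, width=500, seed=0):
    """Track the preactivation variance of a random tanh MLP layer by layer."""
    rng = np.random.default_rng(seed)
    z = rng.normal(0.0, 1.0, width)                  # input preactivations
    history = []
    for _ in range(depth):
        W = rng.normal(0.0, np.sqrt(C_W / width), (width, width))
        z = W @ np.tanh(z)                           # bias variance C_b = 0
        history.append(z.var())
    return history

for C_W in (0.8, 1.0, 1.2):
    print(C_W, variance_through_depth(C_W)[-1])
# C_W < 1: exponential decay; C_W > 1: nonzero fixed point;
# C_W = 1 (criticality): slow, power-law decay in between.
```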
