
The broad objective of this paper is to initiate, through a mathematical model, the study of the causes of wage inequality and to relate them to choices of consumption and the technologies of production in an economy. The paper constructs a simple Heterodox Model of a closed economy, in which the consumption and production parts are clearly separated and yet coupled through a tatonnement process. The equilibria of this process correspond directly to those of a related Arrow-Debreu model. The formulation allows us to identify the combinatorial data which link the parameters of the economic system with its equilibria, in particular, the impact of consumer preferences on wages. The Heterodox model also allows the formulation and explicit construction of the consumer choice game, in which individual utilities serve as the strategies and total or relative wages as the pay-offs. We illustrate, through two examples, the mathematical details of the consumer choice game. We show that consumer preferences, expressed through modified utility functions, do indeed percolate through the economy and influence not only prices but also production and wages. Thus, consumer choice may serve as an effective tool for wage redistribution.
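
For concreteness, below is a minimal sketch of a tatonnement-style price adjustment in a toy two-good exchange economy; the Cobb-Douglas consumers and all parameter values are illustrative assumptions, not the Heterodox model of the paper.

```python
import numpy as np

def tatonnement(excess_demand, p0, step=0.1, tol=1e-8, max_iter=10_000):
    """Adjust prices in the direction of excess demand until markets
    (approximately) clear."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        z = excess_demand(p)
        if np.linalg.norm(z) < tol:
            break
        p = np.maximum(p + step * z, 1e-12)  # keep prices positive
        p /= p.sum()                         # normalise: only relative prices matter
    return p

# Toy two-good exchange economy with Cobb-Douglas consumers (illustrative only).
def excess_demand(p):
    endowments = np.array([[1.0, 0.0], [0.0, 1.0]])   # each consumer owns one good
    alphas = np.array([[0.3, 0.7], [0.6, 0.4]])       # preference weights
    wealth = endowments @ p
    demand = (alphas * wealth[:, None]) / p           # Cobb-Douglas demand
    return demand.sum(axis=0) - endowments.sum(axis=0)

print(tatonnement(excess_demand, p0=[0.5, 0.5]))
```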

Related content

Kelly's theorem states that a set of $n$ points affinely spanning $\mathbb{C}^3$ must determine at least one ordinary complex line (a line passing through exactly two of the points). Our main theorem shows that such sets determine at least $3n/2$ ordinary lines, unless the configuration has $n-1$ points in a plane and one point outside the plane (in which case there are at least $n-1$ ordinary lines). In addition, when at most $2n/3$ of the points are contained in any plane, we prove a theorem giving stronger bounds that take advantage of the existence of lines with 4 or more points (in the spirit of Melchior's and Hirzebruch's inequalities). Furthermore, when the points span 4 or more dimensions, with at most $2n/3$ of the points contained in any three-dimensional affine subspace, we show that there must be a quadratic number of ordinary lines.
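
As an illustration of the object being counted, the sketch below brute-forces the number of ordinary lines determined by a small real point configuration in $\mathbb{R}^3$; the paper itself works over $\mathbb{C}^3$, and the example configuration is hypothetical.

```python
import itertools
import numpy as np

def count_ordinary_lines(points, tol=1e-9):
    """Count lines through exactly two of the given points in R^3
    (finite-precision sketch)."""
    pts = [np.asarray(p, dtype=float) for p in points]
    count = 0
    for i, j in itertools.combinations(range(len(pts)), 2):
        d = pts[j] - pts[i]
        ordinary = True
        for k in range(len(pts)):
            if k in (i, j):
                continue
            # k lies on the line through i and j iff pts[k] - pts[i] is parallel to d
            if np.linalg.norm(np.cross(pts[k] - pts[i], d)) < tol:
                ordinary = False
                break
        if ordinary:
            count += 1
    return count

# A planar unit square plus one point off the plane: no three points are collinear,
# so all 10 lines are ordinary.
config = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 1)]
print(count_ordinary_lines(config))  # 10
```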

Numerous signals in relevant signal processing applications can be modeled as a sum of complex exponentials. Each exponential term captures a particular property of the modeled physical system, and it is possible to define families of signals that are associated with the complex exponentials. In this paper, we formulate a classification problem built on this guiding principle and propose a data processing strategy. In particular, we exploit the information obtained from the analytical model by combining it with data-driven learning techniques. As a result, we obtain a classification strategy that is robust under modeling uncertainties and experimental perturbations. To assess the performance of the new scheme, we test it with experimental data obtained from the scattering response of targets illuminated with an impulse radio ultra-wideband radar.
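
A minimal sketch of the modeling idea, assuming two hypothetical pole families and a least-squares fit to decide which family explains an observed signal better; this only illustrates sums of complex exponentials as class templates and is not the data-driven scheme proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
t = np.arange(n)

# Hypothetical pole families (damping + frequency) for two target classes.
poles = {
    "class_A": np.array([-0.01 + 0.20j, -0.02 + 0.45j]),
    "class_B": np.array([-0.01 + 0.30j, -0.03 + 0.60j]),
}

def dictionary(p):
    """Matrix whose columns are the complex exponentials exp(p_k * t)."""
    return np.exp(np.outer(t, p))

def classify(signal):
    """Assign the signal to the pole family that explains it best in least squares."""
    residuals = {}
    for label, p in poles.items():
        D = dictionary(p)
        amp, *_ = np.linalg.lstsq(D, signal, rcond=None)
        residuals[label] = np.linalg.norm(signal - D @ amp)
    return min(residuals, key=residuals.get)

# Simulate a noisy class-A return and classify it.
true_amp = np.array([1.0, 0.5j])
x = dictionary(poles["class_A"]) @ true_amp + 0.1 * rng.standard_normal(n)
print(classify(x))  # expected: class_A
```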

Changepoint models typically assume the data within each segment are independent and identically distributed, conditional on some parameters that change across segments. This construction may be inadequate when data are subject to local correlation patterns, often resulting in many more changepoints fitted than preferable. This article proposes a Bayesian changepoint model which relaxes the assumption of exchangeability within segments. The proposed model supposes data within a segment are $m$-dependent for some unknown $m \geqslant 0$ which may vary between segments, resulting in a model suitable for detecting clear discontinuities in data which are subject to different local temporal correlations. The approach is suited to both continuous and discrete data. A novel reversible jump MCMC algorithm is proposed to sample from the model; in particular, a detailed analysis of the parameter space is exploited to build proposals for the orders of dependence. Two applications demonstrate the benefits of the proposed model: computer network monitoring via change detection in count data, and segmentation of financial time series.
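
To illustrate the kind of data the model targets, the sketch below simulates segments that are $m$-dependent by construction (each segment is an MA($m$) process, so observations more than $m$ apart are independent); the segment lengths, means, and orders are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_segment(length, m, mean, scale=1.0):
    """Simulate an m-dependent segment as an MA(m) process."""
    eps = rng.normal(0.0, scale, size=length + m)
    weights = np.ones(m + 1) / np.sqrt(m + 1)      # keep the variance comparable across m
    x = np.convolve(eps, weights, mode="valid")    # output has exactly `length` samples
    return mean + x

# Three segments with different means and different local dependence orders m.
segments = [
    simulate_segment(200, m=0, mean=0.0),   # i.i.d. segment
    simulate_segment(150, m=3, mean=2.0),   # locally correlated segment
    simulate_segment(250, m=1, mean=-1.0),
]
y = np.concatenate(segments)
print(y.shape)  # (600,)
```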

We study the optimal batch-regret tradeoff for batch linear contextual bandits. For any batch number $M$, number of actions $K$, time horizon $T$, and dimension $d$, we provide an algorithm and prove its regret guarantee, which, due to technical reasons, features a two-phase expression as the time horizon $T$ grows. We also prove a lower bound theorem that, surprisingly, shows the optimality of our two-phase regret upper bound (up to logarithmic factors) in the \emph{full range} of the problem parameters, therefore establishing the exact batch-regret tradeoff. Compared to the recent work \citep{ruan2020linear}, which showed that $M = O(\log \log T)$ batches suffice to achieve the asymptotically minimax-optimal regret without the batch constraints, our algorithm is simpler and easier to implement in practice. Furthermore, our algorithm achieves the optimal regret for all $T \geq d$, while \citep{ruan2020linear} requires $T$ to be greater than an unrealistically large polynomial of $d$. Along the way, we also prove a new matrix concentration inequality with dependence on their dynamic upper bounds, which, to the best of our knowledge, is the first of its kind in the literature and may be of independent interest.
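
A minimal sketch of a batched linear contextual bandit with UCB exploration, where the regression estimate is refreshed only at batch boundaries; this is a generic illustration of the batch constraint, not the algorithm analysed in the paper, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
d, K, T, M = 5, 10, 2000, 4                      # dimension, actions, horizon, batches
theta_star = rng.normal(size=d) / np.sqrt(d)
batch_ends = np.linspace(T // M, T, M, dtype=int)

A = np.eye(d)                                    # regularised Gram matrix (fixed within a batch)
b = np.zeros(d)
A_pending, b_pending = np.zeros((d, d)), np.zeros(d)
theta_hat, next_batch = np.zeros(d), 0

for t in range(T):
    X = rng.normal(size=(K, d)) / np.sqrt(d)     # contexts for this round
    widths = np.einsum("kd,dc,kc->k", X, np.linalg.inv(A), X)
    a = int(np.argmax(X @ theta_hat + 0.5 * np.sqrt(widths)))   # UCB action
    r = X[a] @ theta_star + 0.1 * rng.normal()
    A_pending += np.outer(X[a], X[a])            # buffer the data within the batch
    b_pending += r * X[a]
    if t + 1 == batch_ends[next_batch]:          # policy update only at batch ends
        A += A_pending
        b += b_pending
        A_pending, b_pending = np.zeros((d, d)), np.zeros(d)
        theta_hat = np.linalg.solve(A, b)
        next_batch = min(next_batch + 1, M - 1)

print("final estimate error:", np.linalg.norm(theta_hat - theta_star))
```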

We introduce the category of information structures, whose objects are suitable diagrams of measurable sets that encode the possible outputs of a given family of observables and their mutual relationships of refinement; they serve as mathematical models of contextuality in classical and quantum settings. Each information structure can be regarded as a ringed site with trivial topology; the structure ring is generated by the observables themselves and its multiplication corresponds to joint measurement. We extend Baudot and Bennequin's definition of information cohomology to this setting, as a derived functor in the category of modules over the structure ring, and show explicitly that the bar construction gives a projective resolution in that category, recovering in this way the cochain complexes previously considered in the literature. Finally, we study the particular case of a one-parameter family of coefficients made of functions of probability distributions. The only 1-cocycles are Shannon entropy or Tsallis $\alpha$-entropy, depending on the value of the parameter.
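
The 1-cocycle condition satisfied by Shannon entropy is the familiar chain rule $H(XY) = H(X) + \sum_x p(x)\, H(Y \mid X = x)$; the sketch below checks it numerically on a random joint distribution.

```python
import numpy as np

def H(p):
    """Shannon entropy of a probability vector (natural log)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Random joint distribution of two observables X (rows) and Y (columns).
rng = np.random.default_rng(3)
pxy = rng.random((3, 4))
pxy /= pxy.sum()
px = pxy.sum(axis=1)

# 1-cocycle (chain rule) condition: H(XY) = H(X) + sum_x p(x) H(Y | X = x).
lhs = H(pxy.ravel())
rhs = H(px) + sum(px[i] * H(pxy[i] / px[i]) for i in range(len(px)))
print(np.isclose(lhs, rhs))  # True
```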

In Differential Evolution (DE) algorithms, a crossover operation that filters the variables to be mutated is employed to search the feasible region flexibly, which has led to successful applications in a variety of complicated optimization problems. To investigate whether the crossover operator of DE helps to improve the performance of evolutionary algorithms (EAs), this paper presents a theoretical analysis of the $(1+1)EA_{C}$ and the $(1+1)EA_{CM}$, two variants of the $(1+1)EA$ that incorporate the binomial crossover operator. In general, the binomial crossover enhances exploration and, under some conditions, yields dominance of the transition matrices. As a result, both the $(1+1)EA_{C}$ and the $(1+1)EA_{CM}$ outperform the $(1+1)EA$ on the unimodal OneMax problem, but do not always dominate it on the Deceptive problem. Finally, we perform an exploration analysis by investigating the probabilities of transferring from non-optimal states to the optimal state of the Deceptive problem, and propose adaptive parameter settings to strengthen the promising effect of the binomial crossover. This suggests that incorporating the binomial crossover is a feasible strategy for improving the performance of EAs.
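
A minimal sketch of a $(1+1)$ EA on OneMax in which the offspring is produced by bitwise mutation followed by binomial crossover with the parent; the exact operator ordering and parameter choices of the $(1+1)EA_{C}$ analysed in the paper may differ, so treat this as an assumption-laden illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def onemax(x):
    return int(x.sum())

def one_plus_one_ea_c(n=50, p_mut=None, cr=0.5, max_evals=100_000):
    """(1+1) EA with bitwise mutation followed by binomial crossover
    between parent and mutant (illustrative sketch)."""
    p_mut = p_mut if p_mut is not None else 1.0 / n
    x = rng.integers(0, 2, size=n)
    fx = onemax(x)
    for evals in range(1, max_evals + 1):
        mutant = np.where(rng.random(n) < p_mut, 1 - x, x)   # bitwise mutation
        cross = rng.random(n) < cr                           # binomial crossover mask
        cross[rng.integers(n)] = True                        # take at least one coordinate from the mutant
        y = np.where(cross, mutant, x)
        fy = onemax(y)
        if fy >= fx:                                         # elitist selection
            x, fx = y, fy
        if fx == n:
            return evals
    return max_evals

print("evaluations to reach the optimum:", one_plus_one_ea_c())
```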

Branchwidth determines how graphs, and more generally, arbitrary connectivity functions (basically, symmetric and submodular set functions) can be decomposed into a tree-like structure by specific cuts. We develop a general framework for designing fixed-parameter tractable (FPT) 2-approximation algorithms for branchwidth of connectivity functions. The first ingredient of our framework is combinatorial: we prove a structural theorem establishing that either a sequence of particular refinement operations can decrease the width of a branch decomposition, or the width of the decomposition is already within a factor of 2 of the optimum. The second ingredient is an efficient implementation of the refinement operations for branch decompositions that support efficient dynamic programming. We present two concrete applications of our general framework.

$\bullet$ An algorithm that, for a given $n$-vertex graph $G$ and integer $k$, in time $2^{2^{O(k)}} n^2$ either constructs a rank decomposition of $G$ of width at most $2k$ or concludes that the rankwidth of $G$ is more than $k$. It also yields a $(2^{2k+1}-1)$-approximation algorithm for cliquewidth within the same time complexity, which, in turn, improves the running times of various algorithms on graphs of cliquewidth $k$ to $f(k)n^2$. Breaking the "cubic barrier" for rankwidth and cliquewidth was an open problem in the area.

$\bullet$ An algorithm that, for a given $n$-vertex graph $G$ and integer $k$, in time $2^{O(k)} n$ either constructs a branch decomposition of $G$ of width at most $2k$ or concludes that the branchwidth of $G$ is more than $k$. This improves over the 3-approximation that follows from the recent treewidth 2-approximation of Korhonen [FOCS 2021].
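
For intuition about the connectivity functions involved, the sketch below computes the cut-rank function, the connectivity function underlying rankwidth: the GF(2)-rank of the adjacency submatrix between a vertex subset and its complement.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    M = (np.array(M, dtype=np.uint8) % 2).copy()
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # swap the pivot row into position
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]               # eliminate column c from other rows
        rank += 1
    return rank

def cut_rank(adj, part):
    """Cut-rank of the vertex subset `part` in the graph with adjacency matrix `adj`."""
    part = set(part)
    rest = [v for v in range(len(adj)) if v not in part]
    sub = [[adj[u][v] for v in rest] for u in part]
    return gf2_rank(sub) if sub and rest else 0

# A 4-cycle 0-1-2-3-0: the cut {0, 1} vs {2, 3} has cut-rank 2.
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
print(cut_rank(C4, {0, 1}))
```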

A cloud service provider strives to provide a high Quality of Service (QoS) to client jobs. Such jobs vary in their computational and Service-Level-Agreement (SLA) obligations, as well as in their tolerance of delays and SLA violations. Job scheduling plays a critical role in servicing cloud demands by allocating appropriate resources to execute client jobs. The response to such jobs is optimized by the cloud provider on a multi-tier cloud computing environment. Typically, the complex and dynamic nature of multi-tier environments makes it difficult to meet such demands, because the tiers depend on each other, which in turn causes bottlenecks in one tier to shift and escalate in subsequent tiers. However, the optimization process of existing approaches produces single-tier-driven schedules that do not account for the differential impact of SLA violations in executing client jobs. Furthermore, the impact of schedules optimized at the tier level on the performance of schedules formulated in subsequent tiers tends to be ignored, resulting in less than optimal performance when measured at the multi-tier level. Thus, failing to meet job obligations incurs SLA penalties, which often take the form of financial compensation or the loss of the future interest and motivation of unsatisfied clients in the service provided. In this paper, a scheduling and allocation approach is proposed to formulate schedules that account for the differential impacts of SLA violation penalties and, thus, produce schedules that are optimal in financial performance. A queue virtualization scheme is designed to facilitate the formulation of optimal schedules at the tier and multi-tier levels of the cloud environment. Because the scheduling problem is NP-hard, a biologically inspired approach is proposed to mitigate the complexity of finding optimal schedules.
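
A single-tier, single-resource sketch of penalty-aware scheduling: jobs are ordered greedily by SLA penalty rate per unit runtime and the incurred penalties are tallied. The job data and the greedy rule are illustrative assumptions; the multi-tier, biologically inspired approach of the paper is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    runtime: float        # estimated execution time on the tier
    penalty_rate: float   # SLA penalty per unit of delay beyond the deadline
    deadline: float

def schedule_by_penalty_density(jobs):
    """Greedy rule: order jobs by penalty rate per unit runtime
    (a weighted-shortest-processing-time heuristic), then tally penalties."""
    order = sorted(jobs, key=lambda j: j.penalty_rate / j.runtime, reverse=True)
    clock, total_penalty = 0.0, 0.0
    for job in order:
        clock += job.runtime
        total_penalty += job.penalty_rate * max(0.0, clock - job.deadline)
    return order, total_penalty

jobs = [
    Job("batch-report", runtime=5.0, penalty_rate=0.2, deadline=12.0),
    Job("web-request", runtime=0.5, penalty_rate=5.0, deadline=1.0),
    Job("analytics", runtime=3.0, penalty_rate=1.0, deadline=4.0),
]
order, penalty = schedule_by_penalty_density(jobs)
print([j.name for j in order], penalty)
```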

We show a flow-augmentation algorithm in directed graphs: there exists a polynomial-time algorithm that, given a directed graph $G$, two vertices $s,t \in V(G)$, and an integer $k$, adds (randomly) to $G$ a number of arcs such that, for every minimal $st$-cut $Z$ in $G$ of size at most $k$, with probability $2^{-\mathrm{poly}(k)}$ the set $Z$ becomes a minimum $st$-cut in the resulting graph. The directed flow-augmentation tool allows us to prove fixed-parameter tractability of a number of problems parameterized by the cardinality of the deletion set, whose parameterized complexity status had repeatedly been posed as an open problem: (1) Chain SAT, defined by Chitnis, Egri, and Marx [ESA'13, Algorithmica'17], (2) a number of weighted variants of classic directed cut problems, such as Weighted $st$-Cut, Weighted Directed Feedback Vertex Set, or Weighted Almost 2-SAT. By proving that Chain SAT is FPT, we confirm a conjecture of Chitnis, Egri, and Marx that, for any graph $H$, if the List $H$-Coloring problem is polynomial-time solvable, then the corresponding vertex-deletion problem is fixed-parameter tractable.
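
To make the statement concrete, the sketch below shows a directed graph with a minimal $st$-cut $Z$ that is larger than the minimum cut, and how adding extra arcs makes $Z$ a minimum cut; the arcs are chosen by hand here purely for illustration, whereas the paper samples them randomly with success probability $2^{-\mathrm{poly}(k)}$.

```python
import networkx as nx

# Directed graph s -> m -> {a, b} -> t.  The cut Z = {(a, t), (b, t)} is a
# minimal s-t cut of size 2, but the minimum cut has value 1 (just {(s, m)}).
G = nx.DiGraph()
G.add_edges_from([("s", "m"), ("m", "a"), ("m", "b"), ("a", "t"), ("b", "t")],
                 capacity=1)
print(nx.minimum_cut_value(G, "s", "t"))   # 1

# Adding extra arcs on the source side raises the minimum cut value so that
# Z becomes a minimum s-t cut of the augmented graph.
H = G.copy()
H.add_edges_from([("s", "a"), ("s", "b")], capacity=1)
print(nx.minimum_cut_value(H, "s", "t"))   # 2 -- now equal to |Z|
```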

In this paper, we develop a framework for constructing energy-preserving methods for multi-component Hamiltonian systems, combining the exponential integrator with the partitioned averaged vector field method. This leads to numerical schemes that combine the advantages of long-time stability and excellent behavior for highly oscillatory or stiff problems. Compared to the existing energy-preserving exponential integrators (EP-EI) in practical implementation, our proposed methods are much more efficient, since they can at least be computed subsystem by subsystem instead of handling a nonlinearly coupled system all at once. Moreover, in most cases, such as the Klein-Gordon-Schr\"{o}dinger equations and the Klein-Gordon-Zakharov equations considered in this paper, the computational cost can be further reduced. Specifically, one part of the derived schemes is fully explicit, and the other is linearly implicit. In addition, we present a rigorous proof that the schemes conserve the original energy of the Hamiltonian systems, in which an alternative technique is utilized so that no additional assumptions are required, in contrast to the proof strategies used for the existing EP-EI. Numerical experiments are provided to demonstrate significant advantages in accuracy, computational efficiency, and the ability to capture highly oscillatory solutions.
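
A minimal sketch of the plain averaged vector field (AVF) method, one building block of the proposed schemes, applied to a single pendulum; the exponential-integrator and partitioning ingredients of the paper are not shown, the quadrature order and step size are arbitrary, and the energy is conserved only up to quadrature and fixed-point-iteration error.

```python
import numpy as np

def H(z):
    """Pendulum Hamiltonian H(q, p) = p**2 / 2 - cos(q)."""
    q, p = z
    return 0.5 * p**2 - np.cos(q)

def grad_H(z):
    q, p = z
    return np.array([np.sin(q), p])

J = np.array([[0.0, 1.0], [-1.0, 0.0]])          # canonical symplectic matrix
nodes, weights = np.polynomial.legendre.leggauss(4)
nodes = 0.5 * (nodes + 1.0)                      # map Gauss nodes from [-1, 1] to [0, 1]
weights = 0.5 * weights

def avf_step(z, h, iters=50):
    """One AVF step: z_new = z + h * J * int_0^1 grad_H((1 - s) z + s z_new) ds,
    with the integral approximated by Gauss quadrature and the implicit
    equation solved by fixed-point iteration."""
    z_new = z.copy()
    for _ in range(iters):
        avg = sum(w * grad_H((1 - s) * z + s * z_new) for s, w in zip(nodes, weights))
        z_new = z + h * (J @ avg)
    return z_new

z, h = np.array([1.0, 0.0]), 0.1
E0 = H(z)
for _ in range(1000):
    z = avf_step(z, h)
print("energy drift after 1000 steps:", abs(H(z) - E0))   # small drift
```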
