
A Mathematical Program with Equilibrium Constraints (MPEC) is formulated to capture the relationships between multiple Mobility Service Providers (MSPs) and the users of a multi-modal transport network. The network supply structure is defined through a novel supernetwork approach in which users' daily trip chains are represented to model the mobility services used to reach each destination. At the upper level, a profit maximization formulation is introduced to describe each MSP's behaviour. At the lower level, users within a class choose minimum-cost routes, according to Wardrop's first equilibrium principle. To consider the interactions between modes, non-separable costs between supernetwork links are defined, and users' equilibrium conditions are formulated as a Variational Inequality (VI). To solve the MPEC, an iterative solution algorithm based on a Modified Projection Method is proposed. Numerical examples are presented to illustrate properties of the model, and to examine scenarios showcasing cooperation or competition strategies between MSPs.
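
As a concrete illustration of the lower-level solver, the following is a minimal sketch of the Modified Projection (extragradient-style) Method applied to a VI with non-separable costs. The two-route instance, cost coefficients, and step size are invented for illustration and are not the paper's network.

```python
import numpy as np

# Minimal sketch of the Modified Projection (extragradient) Method for a
# VI over the fixed-demand feasible set K = {f >= 0, sum(f) = d}.

def project_simplex(v, d):
    """Euclidean projection of v onto {f >= 0, sum(f) = d}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - d
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def modified_projection(F, x0, d, tau=0.05, tol=1e-9, max_iter=20000):
    x = project_simplex(np.asarray(x0, float), d)
    for _ in range(max_iter):
        y = project_simplex(x - tau * F(x), d)      # prediction step
        x_new = project_simplex(x - tau * F(y), d)  # correction step
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

# Non-separable route costs: each route's cost depends on both flows.
def F(f):
    return np.array([10 + 2.0 * f[0] + 0.5 * f[1],
                     15 + 1.0 * f[1] + 0.8 * f[0]])

flows = modified_projection(F, [5.0, 5.0], d=10.0)
print(flows, F(flows))  # equal costs on both used routes (Wardrop)
```

At the returned flows, both used routes have equal cost, consistent with Wardrop's first principle.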

Related Content

The IFIP TC13 Conference on Human-Computer Interaction is a major venue for researchers and practitioners in the field of human-computer interaction to present their work. Over the years, these conferences have attracted researchers from several countries and cultures.
July 10, 2023

The BDD package Adiar manipulates Binary Decision Diagrams (BDDs) in external memory. This enables handling big BDDs, but performance suffers when dealing with moderate-sized BDDs. This is mostly due to initializing expensive external memory data structures, even if their contents can fit entirely inside internal memory. The contents of these auxiliary data structures always correspond to a graph cut in an input or output BDD. Specifically, these cuts respect the levels of the BDD. We formalise the shape of these cuts and prove sound upper bounds on their maximum size for each BDD operation. We have implemented these upper bounds within Adiar. With these bounds, it can predict whether a faster internal memory variant of the auxiliary data structures can be used. In practice, this improves Adiar's running time across the board. Specifically for the moderate-sized BDDs, this results in an average reduction of the computation time by 86.1% (median of 89.7%). In some cases, the reduction is as large as 99.9%. When checking equivalence of hardware circuits from the EPFL Benchmark Suite, the computation time for one of the instances decreased by 52 hours.
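
To illustrate what is being bounded, here is a minimal sketch (our own encoding, not Adiar's implementation) of measuring the level cuts of a BDD: the number of arcs crossing each level boundary, whose maximum bounds how large the levelised auxiliary data structures can grow during an operation.

```python
# Nodes map id -> (level, low_child, high_child); children are node ids
# or the terminals 'F'/'T'. The i-level cut counts arcs going from a node
# at level <= i to one at level > i.

def max_level_cut(nodes, num_levels):
    crossing = [0] * num_levels
    for _, (lvl, low, high) in nodes.items():
        for child in (low, high):
            child_lvl = num_levels if child in ('F', 'T') else nodes[child][0]
            for i in range(lvl, child_lvl):  # arc crosses cuts lvl..child_lvl-1
                crossing[i] += 1
    return max(crossing)

# BDD for x0 AND x1: node 0 tests x0 (level 0), node 1 tests x1 (level 1).
nodes = {0: (0, 'F', 1), 1: (1, 'F', 'T')}
print(max_level_cut(nodes, num_levels=2))  # -> 3
```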

In the field of Artificial Intelligence (AI) and Machine Learning (ML), the approximation of unknown target functions $y=f(\mathbf{x})$ using limited instances $S=\{(\mathbf{x}^{(i)}, y^{(i)})\}$, where $\mathbf{x}^{(i)} \in D$ and $D$ represents the domain of interest, is a common objective. We refer to $S$ as the training set and aim to identify a low-complexity mathematical model that can effectively approximate this target function for new instances $\mathbf{x}$. Consequently, the model's generalization ability is evaluated on a separate set $T=\{\mathbf{x}^{(j)}\} \subset D$, where $T \neq S$, frequently with $T \cap S = \emptyset$, to assess its performance beyond the training set. However, certain applications require accurate approximation not only within the original domain $D$ but also in an extended domain $D'$ that encompasses $D$. This becomes particularly relevant in scenarios involving the design of new structures, where minimizing errors in approximations is crucial. For example, when developing new materials through data-driven approaches, the AI/ML system can provide valuable insights to guide the design process by serving as a surrogate function. Consequently, the learned model can be employed to facilitate the design of new laboratory experiments. In this paper, we propose a method for multivariate regression based on iterative fitting of a continued fraction, incorporating additive spline models. We compare the performance of our method with established techniques, including AdaBoost, Kernel Ridge, Linear Regression, Lasso Lars, Linear Support Vector Regression, Multi-Layer Perceptrons, Random Forests, Stochastic Gradient Descent, and XGBoost. To evaluate these methods, we focus on an important problem in the field: predicting the critical temperature of superconductors based on physical-chemical characteristics.
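
As a rough illustration of the model class, the sketch below fits a continued fraction with affine terms by joint nonlinear least squares. The authors' method instead fits the fraction iteratively and uses additive spline models for each term; the target function, depth, and initialization here are invented.

```python
import numpy as np
from scipy.optimize import least_squares

# Continued-fraction regression model class:
#   f(x) = L0(x) + L1(x) / (L2(x) + L3(x) / (L4(x) + ...)),
# here with affine terms L_k fitted jointly by nonlinear least squares.

def cfrac_eval(theta, X, depth):
    k = X.shape[1] + 1                        # weights + intercept per term
    terms = theta.reshape(2 * depth + 1, k)
    L = [X @ t[:-1] + t[-1] for t in terms]   # affine term values
    p = L[-1]
    for j in range(depth - 1, -1, -1):        # evaluate innermost-out
        p = L[2 * j] + L[2 * j + 1] / np.where(np.abs(p) < 1e-8, 1e-8, p)
    return p

def fit_cfrac(X, y, depth=2, seed=0):
    theta0 = np.random.default_rng(seed).normal(
        0.0, 0.5, (2 * depth + 1) * (X.shape[1] + 1))
    return least_squares(lambda th: cfrac_eval(th, X, depth) - y, theta0).x

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (300, 2))
y = 1.0 / (1.6 + X[:, 0] + 0.5 * X[:, 1]) + 0.3 * X[:, 0]  # rational target
theta = fit_cfrac(X, y)
print(np.mean((cfrac_eval(theta, X, 2) - y) ** 2))          # training MSE
```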

We present a fully polynomial-time approximation scheme (FPTAS) for computing equilibria in congestion games, under smoothed running-time analysis. More precisely, we prove that if the resource costs of a congestion game are randomly perturbed by independent noises, whose density is at most $\phi$, then any sequence of $(1+\varepsilon)$-improving dynamics will reach a $(1+\varepsilon)$-approximate pure Nash equilibrium (PNE) after an expected number of steps which is strongly polynomial in $\frac{1}{\varepsilon}$, $\phi$, and the size of the game's description. Our results establish a sharp contrast to the traditional worst-case analysis setting, where it is known that better-response dynamics take exponentially long to converge to $\alpha$-approximate PNE, for any constant factor $\alpha\geq 1$. As a matter of fact, computing $\alpha$-approximate PNE in congestion games is PLS-hard. We demonstrate how our analysis can be applied to various models of congestion games, including general, step-function, and polynomial cost functions, as well as fair cost-sharing games (where the resource costs are decreasing). It is important to note that our bounds do not depend explicitly on the cardinality of the players' strategy sets, and thus the smoothed FPTAS is readily applicable to network congestion games as well.
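
The following toy simulation illustrates the dynamics being analysed: a singleton congestion game with randomly perturbed affine resource costs, on which players repeatedly make $(1+\varepsilon)$-improving moves until an approximate PNE is reached. The instance sizes and cost distributions are arbitrary choices, not the paper's analysis.

```python
import random

# (1+eps)-improving dynamics in a singleton congestion game with randomly
# perturbed affine resource costs c_r(l) = a_r + b_r * l.

def run_dynamics(n_players=20, n_res=5, eps=0.05, seed=1):
    rng = random.Random(seed)
    a = [rng.uniform(0, 1) for _ in range(n_res)]   # perturbed coefficients
    b = [rng.uniform(0, 1) for _ in range(n_res)]
    choice = [rng.randrange(n_res) for _ in range(n_players)]
    load = [choice.count(r) for r in range(n_res)]
    cost = lambda r, l: a[r] + b[r] * l
    steps, improved = 0, True
    while improved:                                  # until (1+eps)-approx. PNE
        improved = False
        for i in range(n_players):
            cur = choice[i]
            for r in range(n_res):
                if r != cur and cost(cur, load[cur]) > (1 + eps) * cost(r, load[r] + 1):
                    load[cur] -= 1; load[r] += 1; choice[i] = r
                    steps += 1; improved = True
                    break
    return steps

print(run_dynamics())  # number of improving moves until convergence
```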

Global contexts in images are quite valuable in image-to-image translation problems. Conventional attention-based and graph-based models capture the global context to a large extent; however, they are computationally expensive. Moreover, the existing approaches are limited to learning only the pairwise semantic relation between any two points on the image. In this paper, we present Latent Graph Attention (LGA), a computationally inexpensive (linear in the number of nodes), stable, and modular framework for incorporating the global context into existing architectures. LGA especially empowers small-scale architectures to approach the performance of larger ones, thus making light-weight architectures more useful for edge devices with lower compute power and lower energy needs. LGA propagates information spatially using a network of locally connected graphs, thereby facilitating the construction of a semantically coherent relation between any two spatially distant points that also accounts for the influence of the intermediate pixels. Moreover, the depth of the graph network can be used to adapt the extent of contextual spread to the target dataset, thereby explicitly controlling the added computational cost. To enhance the learning mechanism of LGA, we also introduce a novel contrastive loss term that helps our LGA module couple well with the original architecture at the cost of only a minimal additional computational load. We show that incorporating LGA improves performance on three challenging applications, namely transparent object segmentation, image restoration through dehazing, and optical flow estimation.
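
A minimal sketch of the propagation idea (not the authors' LGA module): features are spread over a locally connected grid graph, so k propagation steps let context travel k pixels at a per-step cost linear in the number of nodes. The averaging weights, mixing gate, and wrapped borders are simplifications.

```python
import numpy as np

# Context propagation over a locally connected 4-neighbour grid graph.
# `steps` controls how far context spreads; borders wrap for brevity.

def propagate(feat, steps=3, alpha=0.5):
    """feat: (H, W, C) feature map."""
    x = feat.astype(float)
    for _ in range(steps):
        neigh = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                 np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 4.0
        x = (1 - alpha) * x + alpha * neigh   # mix self with neighbours
    return x

feat = np.zeros((8, 8, 1)); feat[4, 4, 0] = 1.0
out = propagate(feat, steps=3)
print(out[4, 1, 0] > 0, out[4, 5, 0] > 0)  # context reached 3 pixels away
```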

Prior knowledge and symbolic rules in machine learning are often expressed in the form of label constraints, especially in structured prediction problems. In this work, we compare two common strategies for encoding label constraints in a machine learning pipeline, regularization with constraints and constrained inference, by quantifying their impact on model performance. For regularization, we show that it narrows the generalization gap by precluding models that are inconsistent with the constraints. However, its preference for small violations introduces a bias toward a suboptimal model. For constrained inference, we show that it reduces the population risk by correcting a model's violations, hence turning violations into an advantage. Given these differences, we further explore the use of the two approaches together and propose conditions under which constrained inference compensates for the bias introduced by regularization, aiming to improve both model complexity and optimal risk.
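
To make constrained inference concrete, here is a minimal sketch on a toy BIO-tagging task: unconstrained argmax decoding may emit an 'I' after an 'O', while a Viterbi-style decode over allowed transitions corrects the violation. The scores and constraint set are invented for illustration.

```python
import numpy as np

# Constrained inference: decode the best label sequence satisfying the
# BIO rule that 'I' may not follow 'O' and may not start a sequence.

LABELS = ['B', 'I', 'O']
ALLOWED = {('B','B'),('B','I'),('B','O'),('I','B'),('I','I'),('I','O'),
           ('O','B'),('O','O')}  # ('O','I') is forbidden

def constrained_decode(scores):
    n, k = scores.shape
    dp = np.full((n, k), -np.inf); bp = np.zeros((n, k), int)
    dp[0] = scores[0]; dp[0, LABELS.index('I')] = -np.inf  # no initial 'I'
    for t in range(1, n):
        for j in range(k):
            for i in range(k):
                if (LABELS[i], LABELS[j]) in ALLOWED and \
                        dp[t-1, i] + scores[t, j] > dp[t, j]:
                    dp[t, j] = dp[t-1, i] + scores[t, j]; bp[t, j] = i
    path = [int(np.argmax(dp[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(int(bp[t, path[-1]]))
    return [LABELS[i] for i in reversed(path)]

scores = np.array([[0.1, 0.2, 0.9],    # argmax: O
                   [0.3, 0.8, 0.2],    # argmax: I  -> violates 'O I'
                   [0.2, 0.1, 0.7]])   # argmax: O
print(constrained_decode(scores))      # -> ['O', 'B', 'O']
```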

We propose gated language experts and curriculum training to enhance multilingual transformer transducer models without requiring language identification (LID) input from users during inference. Our method incorporates a gating mechanism and LID loss, enabling transformer experts to learn language-specific information. By combining gated transformer experts with shared transformer layers, we construct multilingual transformer blocks and utilize linear experts to effectively regularize the joint network. The curriculum training scheme leverages LID to guide the gated experts in improving their respective language performance. Experimental results on a bilingual task involving English and Spanish demonstrate significant improvements, with average relative word error reductions of 12.5% and 7.3% compared to the baseline bilingual and monolingual models, respectively. Notably, our method achieves performance comparable to the upper-bound model trained and inferred with oracle LID. Extending our approach to trilingual, quadrilingual, and pentalingual models reveals similar advantages to those observed in the bilingual models, highlighting its ease of extension to multiple languages.
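
A minimal numpy sketch of the gating idea (hypothetical shapes and random weights; not the paper's transducer): a learned gate, trainable with an auxiliary LID loss, softly mixes per-language experts with a shared path, so no LID input is needed at inference.

```python
import numpy as np

# Gated expert block: gate weights select among per-language experts,
# whose output is combined with a shared transformation.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_expert_block(x, W_gate, experts, W_shared):
    """x: (T, D) frame features; experts: per-language weight matrices."""
    g = sigmoid(x @ W_gate)                       # (T, n_experts) gates
    g = g / g.sum(axis=1, keepdims=True)          # normalize across experts
    expert_out = sum(g[:, [i]] * (x @ W_i) for i, W_i in enumerate(experts))
    return expert_out + x @ W_shared              # experts + shared layer

rng = np.random.default_rng(0)
T, D = 10, 16
x = rng.normal(size=(T, D))
experts = [rng.normal(size=(D, D)) for _ in range(2)]  # e.g. EN and ES
out = gated_expert_block(x, rng.normal(size=(D, 2)), experts,
                         rng.normal(size=(D, D)))
print(out.shape)  # (10, 16)
```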

We study the logit evolutionary dynamics in population games. For general population games, we prove that, on the one hand, strict Nash equilibria are locally asymptotically stable under the logit dynamics in the low-noise regime and, on the other hand, a globally exponentially stable fixed point exists in the high-noise regime. This suggests the emergence of bifurcations in population games admitting multiple strict Nash equilibria, as verified numerically in previous publications. We then prove sufficient conditions on the game structure for global asymptotic stability of the logit dynamics in every noise regime. Our results apply in particular to heterogeneous routing games, a class of non-potential population games modelling strategic decision-making of users with heterogeneous preferences in transportation networks. In this setting, preference heterogeneity is due, e.g., to access to different sources of information or to different trade-offs between time and money. We show that if the transportation network has parallel routes, then the unique equilibrium of the game is globally asymptotically stable.
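
As a small illustration, the sketch below integrates the logit dynamics $\dot{x} = \sigma(F(x)/\eta) - x$ for a two-route parallel network; the cost coefficients are made up. A low noise level concentrates the population near the Nash equilibrium, while a high noise level drives it toward uniform mixing.

```python
import numpy as np

# Logit dynamics x' = softmax(F(x)/eta) - x on two parallel routes.

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def logit_flow(x0, payoff, eta, dt=0.05, steps=4000):
    x = np.array(x0, float)
    for _ in range(steps):
        x += dt * (softmax(payoff(x) / eta) - x)   # forward-Euler step
    return x

# Payoffs are negative congestion costs on the two routes.
payoff = lambda x: -np.array([1.0 + 2.0 * x[0], 2.0 + 1.0 * x[1]])

for eta in (0.01, 1.0):
    print(eta, logit_flow([0.5, 0.5], payoff, eta))
# eta = 0.01: close to the equilibrium split (2/3, 1/3); eta = 1.0: mixed.
```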

With social media, the flow of uncertified information is constantly increasing, with the risk that more people will trust low-credibility information sources. To design effective strategies against this phenomenon, it is of paramount importance to understand how people end up believing one source rather than another. To this end, we propose a realistic and cognitively affordable heuristic mechanism for opinion formation inspired by the well-known belief propagation algorithm. In our model, an individual observing a network of information sources must infer which of them are reliable and which are not. We study how the individual's ability to identify credible sources, and hence to form correct opinions, is affected by the noise in the system, understood as the amount of disorder in the relationships between the information sources in the network. We find numerically and analytically that there is a critical noise level above which it is impossible for the individual to detect the nature of the sources. Moreover, by comparing our opinion formation model with existing ones in the literature, we show under what conditions people's opinions can be reliable. Overall, our findings imply that the increasing complexity of the information environment is a catalyst for misinformation channels.
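
The sketch below is our own toy instance of the idea, not the paper's exact model: sources have hidden reliabilities $s_i = \pm 1$, pairwise agreement signs are flipped with probability $p$ (the noise), and the observer runs a mean-field message-passing update to recover the reliabilities. Small $p$ allows recovery; large $p$ does not.

```python
import numpy as np

# Toy reliability inference on a noisy signed network of sources.

def infer_reliability(J, iters=50, beta=2.0):
    n = len(J)
    b = np.random.default_rng(0).uniform(-0.1, 0.1, n)  # initial beliefs
    for _ in range(iters):
        b = np.tanh(beta * (J @ np.tanh(b)) / n)        # local update
    return b

rng = np.random.default_rng(1)
n, p = 60, 0.15
s = rng.choice([-1, 1], size=n)             # hidden reliabilities
J = np.outer(s, s).astype(float)
J[rng.random((n, n)) < p] *= -1             # noisy pairwise observations
J = np.triu(J, 1); J = J + J.T              # symmetric, zero diagonal
b = infer_reliability(J)
acc = np.mean(np.sign(b) == s)
print(max(acc, 1 - acc))  # recovery accuracy, up to a global sign flip
```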

Gaussian variational inference and the Laplace approximation are popular alternatives to Markov chain Monte Carlo that formulate Bayesian posterior inference as an optimization problem, enabling the use of simple and scalable stochastic optimization algorithms. However, a key limitation of both methods is that the solution to the optimization problem is typically not tractable to compute; even in simple settings the problem is nonconvex. Thus, recently developed statistical guarantees -- which all involve the (data) asymptotic properties of the global optimum -- are not reliably obtained in practice. In this work, we provide two major contributions: a theoretical analysis of the asymptotic convexity properties of variational inference with a Gaussian family and the maximum a posteriori (MAP) problem required by the Laplace approximation; and two algorithms -- consistent Laplace approximation (CLA) and consistent stochastic variational inference (CSVI) -- that exploit these properties to find the optimal approximation in the asymptotic regime. Both CLA and CSVI involve a tractable initialization procedure that finds the local basin of the optimum, and CSVI further includes a scaled gradient descent algorithm that provably stays locally confined to that basin. Experiments on nonconvex synthetic and real-data examples show that compared with standard variational and Laplace approximations, both CSVI and CLA improve the likelihood of obtaining the global optimum of their respective optimization problems.
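
For reference, a minimal sketch of the plain Laplace approximation that CLA builds on: gradient ascent to a (local) MAP, then a Gaussian whose covariance is the inverse negative Hessian there. The bimodal toy posterior shows the nonconvexity issue: the basin the optimizer starts in decides which mode is approximated. CLA/CSVI's smoothed initialization, which is the paper's contribution, is not reproduced here.

```python
import numpy as np

# Plain Laplace approximation: MAP by gradient ascent, then a Gaussian
# with covariance equal to the inverse negative Hessian at the MAP.

def laplace_approx(grad, hess, x0, lr=0.1, steps=2000):
    x = np.array(x0, float)
    for _ in range(steps):
        x += lr * grad(x)                   # ascend the log-posterior
    return x, np.linalg.inv(-hess(x))       # MAP and Gaussian covariance

# log p(x) = -(x^2 - 1)^2 / 0.5  (two modes, at x = +1 and x = -1)
grad = lambda x: np.array([-8.0 * x[0] * (x[0] ** 2 - 1)])
hess = lambda x: np.array([[-8.0 * (3 * x[0] ** 2 - 1)]])

for x0 in ([0.3], [-0.3]):                  # the starting basin decides
    mu, cov = laplace_approx(grad, hess, x0)
    print(mu, cov)                          # (+1, 1/16) or (-1, 1/16)
```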

Many food products involve mixtures of ingredients, where the mixtures can be expressed as combinations of ingredient proportions. In many cases, the quality of the product and consumer preference may also depend on the way in which the mixtures are processed. The processing is generally defined by the settings of one or more process variables. Experimental designs studying the joint impact of the mixture ingredient proportions and the settings of the process variables are called mixture-process variable experiments. In this article, we show how to combine mixture-process variable experiments and discrete choice experiments to quantify and model consumer preferences for food products that can be viewed as processed mixtures. First, we describe the modeling of data from such combined experiments. Next, we describe how to generate D- and I-optimal designs for choice experiments involving mixtures and process variables, and we compare the two kinds of designs using two examples.
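
As a sketch of the evaluation step inside such design algorithms, the code below computes the D-criterion (log-determinant of the multinomial logit information matrix) for a small choice design; the model terms, choice sets, and prior parameter vector are invented for illustration.

```python
import numpy as np

# D-criterion for a multinomial logit (MNL) choice design. X_sets[s]
# holds one attribute row per alternative in choice set s (for a
# mixture-process design these rows would contain the mixture and
# process model terms); beta is an assumed prior parameter vector.

def mnl_information(X_sets, beta):
    k = len(beta)
    M = np.zeros((k, k))
    for X in X_sets:
        u = X @ beta
        p = np.exp(u - u.max()); p /= p.sum()          # choice probabilities
        M += X.T @ (np.diag(p) - np.outer(p, p)) @ X   # set's contribution
    return M

def d_criterion(X_sets, beta):
    sign, logdet = np.linalg.slogdet(mnl_information(X_sets, beta))
    return logdet if sign > 0 else -np.inf             # larger is better

# Two choice sets, three alternatives each, model terms (x1, x1^2, z):
X_sets = [np.array([[1.0, 1.00, -1.0],
                    [0.0, 0.00, -1.0],
                    [0.5, 0.25,  1.0]]),
          np.array([[0.2, 0.04,  1.0],
                    [0.8, 0.64, -1.0],
                    [0.5, 0.25, -1.0]])]
print(d_criterion(X_sets, beta=np.array([1.0, -0.5, 0.3])))
```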
