
In this paper, we derive a multicontinuum model for the coupled flow and transport equations by applying multicontinuum homogenization. We perform the multicontinuum expansion for both the flow and transport solutions and formulate novel coupled constraint cell problems to capture the multiscale properties, using oversampled regions to avoid boundary effects. Assuming smoothness of the macroscopic variables, we obtain a multicontinuum system composed of macroscopic elliptic equations and convection-diffusion-reaction equations with homogenized effective properties. Finally, we present numerical results for various coefficient fields and boundary conditions to validate the proposed algorithm.
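
For orientation, a schematic of the multicontinuum expansion commonly used in this framework is sketched below; the notation ($U_i$, $\phi_i$, $\phi_i^{m}$) is assumed here for illustration and is not taken from the abstract:
\[
u(x) \;\approx\; \sum_{i} \phi_i(x)\, U_i(x) \;+\; \sum_{i,m} \phi_i^{m}(x)\, \partial_{m} U_i(x),
\]
where the $U_i$ are smooth macroscopic variables representing the solution average in each continuum $i$, and the auxiliary functions $\phi_i$, $\phi_i^{m}$ solve constraint cell problems posed on oversampled regions. Substituting such an expansion into the flow and transport equations and averaging over the cells yields the homogenized effective coefficients of the macroscopic system.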

Related Content

In this paper, we propose reverse inference optimization (RIO), a simple and effective method designed to enhance the robustness of autoregressive-model-based zero-shot text-to-speech (TTS) systems using reinforcement learning from human feedback (RLHF). To assess the quality of speech produced by the TTS system without human annotations, RIO introduces a novel concept termed reverse inference, based on the Bayesian principle, which posits that high-quality generated speech should itself be usable as a prompt for subsequent generation with the same TTS model. By using reverse inference as the criterion for selecting RLHF exemplars from the speech samples generated by the TTS system itself, RIO steers the subsequent optimization towards improved TTS robustness. The RIO framework, comprising sampling, automatic annotating, and learning, obviates the need for a reward model or pairwise preference data, and significantly improves the stability of zero-shot TTS performance by reducing the discrepancies between training and inference conditions. Our experimental results verify that RIO can effectively improve both subjective and objective metrics, including mean opinion scores, word error rates, and speaker similarity. Remarkably, RIO also reduces the incidence of bad outputs to nearly zero percent, rivalling the robustness obtained when using ground-truth speech as the prompt.
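
As a rough illustration of the sampling-annotating-learning loop described above (not the authors' code; all object and method names are hypothetical placeholders), one RIO-style iteration might look like the following sketch:

def rio_iteration(tts_model, texts, scorer, optimizer, num_candidates=4):
    """Hypothetical sketch: sample candidates, score them by reverse inference,
    then optimize the TTS model towards the best-scoring exemplars."""
    exemplars = []
    for text in texts:
        # Sample several candidate utterances for the same text.
        candidates = [tts_model.generate(prompt=None, text=text) for _ in range(num_candidates)]
        # Reverse inference: reuse each candidate as the prompt for a second generation
        # and score that second-pass output (e.g. by WER and speaker similarity).
        scores = [scorer(tts_model.generate(prompt=c, text=text)) for c in candidates]
        best = candidates[max(range(len(candidates)), key=lambda i: scores[i])]
        exemplars.append((text, best))
    # Preference-free optimization step towards the selected exemplars
    # (no reward model, no pairwise preference data).
    loss = tts_model.exemplar_loss(exemplars)
    loss.backward()
    optimizer.step()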

In this research, we examine the minsum flow problem in dynamic path networks where flows are represented as discrete and weighted sets. The minsum flow problem has been widely studied for its relevance in finding evacuation routes during emergencies such as earthquakes. However, previous approaches often assume that individuals are separable and identical, which does not adequately account for the fact that some groups of people, such as families, need to move together and that some groups may be more important than others. To address these limitations, we modify the minsum flow problem to support flows represented as discrete and weighted sets. We also propose a 2-approximation pseudo-polynomial time algorithm to solve this modified problem for path networks with uniform capacity.
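
To make the modified problem concrete, a toy instance could be encoded as below; this is a hedged illustration of the input data only, not the authors' formulation or their 2-approximation algorithm:

# Hypothetical toy instance of the minsum flow problem with discrete, weighted sets.
# Vertices 0-1-2-3 form a path; every edge has the same (uniform) capacity per time
# step and unit transit time; vertex 3 is the sink (evacuation target).
path_vertices = [0, 1, 2, 3]
edge_capacity = 2
sink = 3
# Each group is indivisible and weighted: (start_vertex, group_size, importance_weight).
groups = [
    (0, 3, 2.0),   # e.g. a family of 3 that must move together
    (1, 1, 1.0),
    (2, 2, 5.0),   # a high-priority group
]
# Objective (minsum): minimize the sum over groups of importance_weight * arrival_time
# at the sink, subject to edge capacities and each group moving as a single unit.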

In this work, we present a comprehensive three-phase study to examine (1) the effectiveness of large multimodal models (LMMs) in recognizing cultural contexts; (2) the accuracy of their representations of diverse cultures; and (3) their ability to adapt content across cultural boundaries. We first introduce Dalle Street, a large-scale dataset generated by DALL-E 3 and validated by humans, containing 9,935 images of 67 countries and 10 concept classes. We reveal disparities in cultural understanding at the sub-region level with both open-weight (LLaVA) and closed-source (GPT-4V) models on Dalle Street and other existing benchmarks. Next, we assess the models' deeper cultural understanding through an artifact extraction task and identify over 18,000 artifacts associated with different countries. Finally, we propose a highly composable pipeline, CultureAdapt, to adapt images from one culture to another. Our findings reveal a nuanced picture of the cultural competence of LMMs, highlighting the need to develop culture-aware systems. Dataset and code are available at //github.com/iamshnoo/crossroads

In this paper, we focus on methods to reduce the size and improve the quality of the prompt context required for question-answering systems. Increasing the number of retrieved chunked documents to enlarge the context related to the query can significantly complicate processing and degrade the performance of a Large Language Model (LLM) when generating responses. It is well known that a large set of documents retrieved from a database in response to a query may contain irrelevant information, which often leads to hallucinations in the resulting answers. Our goal is to select the most semantically relevant documents, treating the discarded ones as outliers. We propose and evaluate several methods for identifying outliers by constructing features from the distances of embedding vectors, retrieved from the vector database, to both the centroid and the query vector. The methods were evaluated by comparing the similarity of the retrieved LLM responses to ground-truth answers obtained using the OpenAI GPT-4o model. We found that the gains were greatest for more complex questions and answers.
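
As a hedged sketch of the kind of features described (one plausible realization, not the paper's exact method), one can compute, for each retrieved chunk embedding, its distances to the centroid of the retrieved set and to the query embedding, and flag the largest combined distances as outliers:

import numpy as np

def outlier_mask(chunk_embeddings, query_embedding, z_thresh=1.5):
    """Flag retrieved chunks whose embeddings are far from the centroid and the query.

    chunk_embeddings: (n_chunks, dim) array of retrieved-chunk embeddings.
    query_embedding:  (dim,) array for the query.
    Returns a boolean mask; True marks a chunk treated as an outlier.
    """
    centroid = chunk_embeddings.mean(axis=0)
    d_centroid = np.linalg.norm(chunk_embeddings - centroid, axis=1)
    d_query = np.linalg.norm(chunk_embeddings - query_embedding, axis=1)
    # Simple combined feature; the paper evaluates several such feature constructions.
    feature = d_centroid + d_query
    z = (feature - feature.mean()) / (feature.std() + 1e-12)
    return z > z_thresh  # chunks with unusually large distances are discarded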

In this paper, we aim to analyse the generalization capabilities of first-order methods for statistical learning in several different yet related scenarios, including supervised learning, transfer learning, robust learning, and federated learning. To do so, we provide sharp upper and lower bounds for the minimax excess risk of strongly convex and smooth statistical learning when the gradient is accessed through partial observations given by a data-dependent oracle. This novel class of oracles can query the gradient with any given data distribution and is thus well suited to scenarios in which the training data distribution does not match the target (or test) distribution. In particular, our upper and lower bounds are proportional to the smallest mean square error achievable by gradient estimators, allowing us to easily derive sharp bounds in the aforementioned scenarios using the extensive literature on parameter estimation.
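
Schematically, and with notation assumed here for illustration only ($\mu$ the strong convexity parameter, $F$ the population risk, $\theta^\star$ its minimizer), the type of bound described above reads
\[
\inf_{\text{algorithms}} \; \sup_{\text{problems}} \; \mathbb{E}\big[F(\hat{\theta}) - F(\theta^\star)\big] \;\asymp\; \frac{1}{\mu}\, \inf_{\hat{g}} \; \mathbb{E}\big\| \hat{g} - \nabla F(\theta^\star) \big\|^2 ,
\]
i.e. the minimax excess risk is governed, up to problem-dependent constants, by the smallest mean square error attainable by any estimator $\hat{g}$ of the gradient built from the oracle's partial observations; the exact constants and assumptions are those stated in the paper.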

In this paper, we introduce innovative approaches for accelerating the Jacobi method for matrix diagonalization, formulating large-matrix diagonalization as a Semi-Markov Decision Process and small-matrix diagonalization as a Markov Decision Process. Furthermore, we examine the potential of sharing a scalable architecture between matrices of different sizes. With only a short training period, our method discovered strategies that significantly reduce the number of steps required for diagonalization and exhibited efficient inference. Importantly, the approach showed promising scalability to large matrices, indicating its potential for wide-ranging applicability. Upon training completion, we obtain action-state probabilities and transition graphs that depict transitions between different states. These outputs not only provide insights into the diagonalization process but also pave the way for cost savings on large-scale matrices. The advancements made in this research enhance the efficacy and scalability of matrix diagonalization, opening new possibilities for deployment in practical scientific and engineering applications.
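
To ground the formulation, here is a hedged sketch of how a single decision step can be framed (the names, reward choice, and pivot selection are illustrative assumptions, not the authors' implementation): the state is the current symmetric matrix, an action selects a pivot pair (p, q), the transition applies the corresponding Jacobi rotation, and the reward can be taken as the reduction of the off-diagonal norm.

import numpy as np

def jacobi_step(A, p, q):
    """Apply one Jacobi rotation zeroing A[p, q]; returns the new matrix (next state)."""
    if A[p, q] == 0.0:
        return A.copy()
    theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
    c, s = np.cos(theta), np.sin(theta)
    J = np.eye(A.shape[0])
    J[p, p], J[q, q] = c, c
    J[p, q], J[q, p] = s, -s
    return J.T @ A @ J

def off_diagonal_norm(A):
    return np.sqrt(np.sum(A**2) - np.sum(np.diag(A)**2))

# One MDP-style transition: action = pivot pair (p, q), reward = drop in off-diagonal norm.
A = np.random.rand(5, 5)
A = (A + A.T) / 2                 # symmetric matrix as the current state
p, q = 1, 3                       # action (here fixed; a learned policy would choose it)
A_next = jacobi_step(A, p, q)
reward = off_diagonal_norm(A) - off_diagonal_norm(A_next)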

In this paper, we focus on the design of binary constant weight codes that admit low-complexity encoding and decoding algorithms and that have size $M=2^k$. For every integer $\ell \geq 3$, we construct an $(n=2^\ell, M=2^{k_{\ell}}, d=2)$ constant weight code ${\cal C}[\ell]$ of weight $\ell$ by encoding information in the gaps between successive $1$'s. The code is associated with an integer sequence of length $\ell$ satisfying a constraint we call {\em anchor-decodability}, which ensures low complexity for encoding and decoding. The complexity of encoding is linear in the input size $k$, and that of decoding is poly-logarithmic in the input size $n$, discounting the linear time spent on parsing the input. Unlike many existing schemes, neither algorithm requires expensive computation of binomial coefficients. Among codes generated by all anchor-decodable sequences, we show that ${\cal C}[\ell]$ has the maximum size, with $k_{\ell} \geq \ell^2-\ell\log_2\ell + \log_2\ell - 0.279\ell - 0.721$. As $k$ is information-theoretically upper bounded by $\ell^2-\ell\log_2\ell +O(\ell)$, the code ${\cal C}[\ell]$ is optimal in size with respect to the two highest-order terms in $\ell$. In particular, $k_\ell$ meets the upper bound for $\ell=3$ and is one bit away from it for $\ell=4$. On the other hand, we show that ${\cal C}[\ell]$ is not unique in attaining $k_{\ell}$ by constructing an alternate code ${\cal \hat{C}}[\ell]$, again parameterized by an integer $\ell \geq 3$, with a different low-complexity decoder yet the same size $2^{k_{\ell}}$ when $3 \leq \ell \leq 7$. Finally, we also derive new codes by modifying ${\cal C}[\ell]$ that offer a wider range of blocklengths and weights while retaining low complexity for encoding and decoding. For certain selected parameter values, these modified codes also attain an optimal $k$.
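
As a hedged toy illustration of the gap-encoding idea only (not the anchor-decodable construction ${\cal C}[\ell]$ itself), a sequence of non-negative gaps can be mapped to a constant weight word by writing that many zeros before each of the $\ell$ ones:

def gaps_to_codeword(gaps, n):
    """Toy gap encoder: place a 1 after each run of zeros of the given length.

    gaps[i] is the number of 0s before the (i+1)-th 1; the word is padded with 0s to length n.
    """
    word = []
    for g in gaps:
        word.extend([0] * g)
        word.append(1)
    assert len(word) <= n, "gaps too large for blocklength n"
    return word + [0] * (n - len(word))

def codeword_to_gaps(word):
    """Recover the gaps by scanning the positions of the 1s (trailing padding is ignored)."""
    gaps, run = [], 0
    for bit in word:
        if bit == 1:
            gaps.append(run)
            run = 0
        else:
            run += 1
    return gaps

# Example with weight 3 and blocklength 8: gaps (1, 0, 2) -> 0 1 1 0 0 1 0 0
print(gaps_to_codeword([1, 0, 2], 8))   # [0, 1, 1, 0, 0, 1, 0, 0]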

In this work, we propose using mechanistic interpretability -- techniques for reverse engineering model weights into human-interpretable algorithms -- to derive and compactly prove formal guarantees on model performance. We prototype this approach by formally proving lower bounds on the accuracy of 151 small transformers trained on a Max-of-$K$ task. We create 102 different computer-assisted proof strategies and assess, for each of our models, the length of each proof and the tightness of the bound it yields. Using quantitative metrics, we find that shorter proofs seem both to require and to provide more mechanistic understanding. Moreover, we find that more faithful mechanistic understanding leads to tighter performance bounds. We confirm these connections by qualitatively examining a subset of our proofs. Finally, we identify compounding structureless noise as a key challenge for using mechanistic interpretability to generate compact proofs of model performance.

In this paper, we tackle two challenges in multimodal learning for visual recognition: 1) missing modalities that occur during either training or testing in real-world situations; and 2) the lack of computational resources to finetune heavy transformer models. To this end, we propose to utilize prompt learning and mitigate both challenges together. Specifically, our modality-missing-aware prompts can be plugged into multimodal transformers to handle general missing-modality cases, while requiring less than 1% of the learnable parameters needed to train the entire model. We further explore the effect of different prompt configurations and analyze the robustness to missing modalities. Extensive experiments show the effectiveness of our prompt learning framework, which improves performance under various missing-modality cases while alleviating the requirement for heavy model re-training. Code is available.
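
A hedged PyTorch-style sketch of the idea follows (module names, shapes, and the case encoding are illustrative assumptions, not the authors' released code): a small set of learnable prompt vectors, one group per missing-modality pattern, is prepended to the frozen transformer's input sequence, and only the prompts and a task head are trained.

import torch
import torch.nn as nn

class MissingAwarePrompts(nn.Module):
    def __init__(self, dim, prompt_len=16, num_cases=3):
        """num_cases: e.g. 0 = both modalities present, 1 = image missing, 2 = text missing."""
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_cases, prompt_len, dim) * 0.02)

    def forward(self, tokens, case_id):
        # tokens: (batch, seq_len, dim) input embeddings fed to the frozen multimodal transformer
        batch = tokens.shape[0]
        prompt = self.prompts[case_id].unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, tokens], dim=1)   # prepend the case-specific prompts

# Only the prompts (and a task head) receive gradients; the transformer stays frozen, e.g.:
# for p in multimodal_transformer.parameters(): p.requires_grad_(False)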

In this paper, we propose jointly learned attention and recurrent neural network (RNN) models for multi-label classification. While approaches based on either model exist (e.g., for the task of image captioning), training such existing network architectures typically requires pre-defined label sequences. For multi-label classification, it is desirable to have a robust inference process so that prediction errors do not propagate and degrade performance. Our proposed model uniquely integrates attention and Long Short-Term Memory (LSTM) models, which not only addresses the above problem but also allows one to identify visual objects of interest with varying sizes without prior knowledge of a particular label ordering. More importantly, label co-occurrence information can be jointly exploited by our LSTM model. Finally, by advancing the technique of beam search, prediction of multiple labels can be efficiently achieved by our proposed network model.
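
To illustrate the beam-search component, here is a hedged, generic sketch (the per-step scoring callback is a hypothetical stand-in for the attention-LSTM model's label distribution, not the paper's implementation): multi-label prediction keeps the top-k partial label sequences at each step, without assuming any fixed label ordering.

def beam_search_labels(step_log_probs, num_labels, beam_width=3, max_steps=4):
    """Generic beam search over label sequences.

    step_log_probs(prefix) -> list of num_labels log-probabilities for the next label,
    conditioned on the labels predicted so far (hypothetical model callback).
    """
    beams = [([], 0.0)]                      # (label sequence, cumulative log-prob)
    for _ in range(max_steps):
        candidates = []
        for seq, score in beams:
            log_probs = step_log_probs(seq)
            for label in range(num_labels):
                if label in seq:             # each label is predicted at most once
                    continue
                candidates.append((seq + [label], score + log_probs[label]))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]                       # best label set found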
