The Scale AI Multilingual Prompts Dataset, composed of 1,000 prompts per language, is tailored to enhance models’ interaction capabilities across multiple languages. This dataset specifically aims to refine chatbots’ proficiency in engaging with Korean users, reflecting the complexity of Korean communication.
This dataset introduces single-turn prompts across a diversified range of scenarios, aiming to evaluate and improve models’ responses in both general and culturally nuanced conversations.
Every prompt is designed around a clear goal, request, or instruction that specifies what the response needs to achieve, and this objective is relevant to the category assigned to the task. In addition, prompt complexity varies by combining different subsets of the following building blocks: Local Context, Limitations, Role, Audience, Style, and Format.
For Limitations, a prompt can define one or several constraints. Specifying a Role ensures the model’s responses are judged within a specific, well-defined context, which aligns the evaluation criteria with the expected output. This approach provides consistency, improves relevance, and allows for a more targeted assessment of the model’s ability to emulate expertise in a given domain.
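For illustration only, the building blocks above can be thought of as optional fields attached to each prompt. The sketch below is hypothetical: the field names and example values are ours, not the actual dataset schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PromptSpec:
    """Hypothetical representation of one dataset prompt and its building blocks."""
    text: str                             # the single-turn prompt shown to the model
    category: str                         # one of the task categories listed below
    local_context: Optional[str] = None   # e.g. a Korea-specific setting or reference
    limitations: List[str] = field(default_factory=list)  # zero or more constraints
    role: Optional[str] = None            # persona the model should adopt
    audience: Optional[str] = None        # who the response is written for
    style: Optional[str] = None           # requested writing style
    format: Optional[str] = None          # requested output format

# Illustrative example (values invented for this sketch)
example = PromptSpec(
    text="Draft a polite email rescheduling a client meeting to next Tuesday.",
    category="Workplace Productivity",
    limitations=["Keep it under 150 words", "Use formal honorifics"],
    role="Executive assistant",
    audience="A senior client",
    style="Formal",
    format="Email",
)
```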
The evaluation process intentionally focuses on more complex use cases, which differ from the simpler and more straightforward conversations between a regular user and a chatbot. This approach more effectively differentiates models based on their varying underlying capabilities.
Category | Definition
---|---
Coding & Technical Assistance | Assist with writing, understanding, and fixing code in a simple, clear manner for those working on programming projects. |
Creative Expression | Encourage creativity in music, writing, and art by providing easy-to-follow guidance and inspiration for artists. |
Daily Life Assistance | Offer advice on everyday tasks like cooking, booking services, and organizing schedules in a straightforward, practical way. |
Educational Support | Help students with their studies, homework, and any school-related problems in a way that's easy to understand. |
Entertainment & Recreation | Create fun experiences with games, trivia, sports discussions, and interactive stories for people of all ages. |
Idea Development & Content Exploration | Help generate new ideas, answer questions, and clarify thoughts in a clear, accessible way for creative and curious minds. |
Information & Learning | Provide easy explanations of complex topics, help with learning new skills, and give advice on a wide range of subjects. |
Other | Cover any other tasks not listed above, providing a flexible and understandable approach for a variety of needs. |
Personal & Professional Organization | Help people plan their personal and work lives, including events, budgets, and projects, with simple, actionable advice. |
Shopping & Consumer Research | Guide on making smart shopping choices with clear comparisons, recommendations, and product insights. |
Travel Assistance | Make travel planning simple by providing easy-to-understand advice on destinations, planning, and local insights. |
Workplace Productivity | Help professionals streamline their work with tips on writing, presenting, organizing, and more, in a straightforward manner. |
Writing & Communication | Improve communication skills with tips on writing everything from social media posts to professional emails, in a clear and engaging way. |
Development of this dataset followed a structured approach:
Original Content Requirement: Unique content generation was enforced, prohibiting the use of existing resources or models.
Review Stages: The content underwent qualitative and grammatical assessments.
Final Quality Audit: A final evaluation on a select sample refined the dataset to 1,000 prompts.
Experts fluent in various languages with cultural knowledge were chosen to contribute, ensuring the dataset's relevance and authenticity.
Quality was maintained through:
Multi-stage Reviews: Ensuring clarity, complexity, and cultural specificity.
Internal Benchmarking: Monitoring and assessing annotator performance.
Final Audits: Revising or removing prompts that did not meet quality standards.
To capture a nuanced assessment of both general language understanding and of regionalized styles from Korea, we created an evaluation taxonomy specific to evaluating this context. Each model response was evaluated across a set of stand-alone criteria, covering a diverse set of use cases, and side-by-side with another model response to measure preference ranking on a 7-point Likert scale.
Main stand-alone criteria and their sub-categories:
Honesty:
Understanding:
Prompt Understanding - Language Consistency: The response uses the same language as the prompt, unless a different language is explicitly requested.
Localization: The response shows that the model understands the context implied by the language and is appropriate for the locale, aligning with the local culture.
Consistency: The response is consistent: all statements are coherent, and no statement contradicts a prior one.
Factual Accuracy:
Claim Factuality: The response makes false or inaccurate claims that can be fact-checked and refuted.
Claim Verifiability: The response makes unverifiable claims or presents opinions as facts.
Helpfulness:
Instruction Following:
Response Relevance:
Point of interest: Assess whether all of the model response content is relevant to the prompt request.
Common Pitfall: If the model fulfills all of the prompt requests but does more than the prompt requires, the response has no issues in “Main Request Adherence” and only issues in “Response Relevance.”
Main Request Adherence:
Point of interest: Evaluate how well the response adheres to the main request specified in the prompt.
Common Pitfall: This does not include the factual accuracy of the response; the model can follow the instructions completely yet still make inaccurate statements.
Constraint Adherence:
Point of interest: Evaluate whether the response satisfies all constraints specified in the prompt.
Common Pitfall: Attributing issues that belong to other criteria, such as Main Request Adherence or the Writing criteria, to this one. Note that failing a prompt’s only constraint is a more severe instruction-following issue than failing one out of five constraints.
Writing:
Mechanics: Are the spelling, grammar, punctuation, and syntax all correct?
Structural Clarity: Is the response well organized, with a clear, logical flow and appropriate formatting?
Verbosity: Is the response concise, without repeating the same arguments or being overly wordy in its explanations?
Fluency: The response is easy to understand and sounds natural, with no awkward sentence constructions carried over from other languages such as English.
Style and Tone: Are the style and tone of the writing suitable for the topic and audience?
Harmlessness:
Does the response contain harmful content or facilitate harmful actions?
No: The response is free from harmful content and does not facilitate harmful actions.
Yes: The response contains harmful elements or encourages harmful actions.
Each dimension’s sub-category is rated on whether or not the response complies with the sub-category definition (i.e., a binary ‘yes’ or ‘no’ score).
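For illustration only, one annotator’s stand-alone ratings for a single response could be recorded as a nested mapping like the sketch below. The structure and placeholder values are ours, not the actual annotation schema.

```python
# Hypothetical record of one stand-alone (point-wise) evaluation.
# Each leaf holds the binary 'yes'/'no' judgment for that sub-category.
standalone_ratings = {
    "Honesty": {
        "Understanding": {
            "Language Consistency": "yes",
            "Localization": "yes",
        },
        "Consistency": "yes",
        "Factual Accuracy": {
            "Claim Factuality": "no",
            "Claim Verifiability": "no",
        },
    },
    "Helpfulness": {
        "Instruction Following": {
            "Response Relevance": "yes",
            "Main Request Adherence": "yes",
            "Constraint Adherence": "yes",
        },
        "Writing": {
            "Mechanics": "yes",
            "Structural Clarity": "yes",
            "Verbosity": "yes",
            "Fluency": "yes",
            "Style and Tone": "yes",
        },
    },
    "Harmlessness": "no",
}
```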
After the evaluation of stand-alone criteria, responses are compared side-by-side using a Likert scale. This comparative assessment helps identify the preferred model response, based on a detailed justification tied to the evaluation criteria. We rank the models by the side-by-side Elo scores for the leaderboard rankings.
Each model is randomly paired with other models, and each pairing receives a randomly chosen prompt from the set of 1,000 prompts described above.
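The exact rating aggregation is not specified here, but the minimal sketch below shows one standard way Elo-style scores can be updated from a sequence of pairwise outcomes; the starting rating and K-factor are illustrative assumptions, not Scale’s actual parameters.

```python
from collections import defaultdict

def update_elo(ratings, model_a, model_b, score_a, k=32):
    """One Elo update. score_a is 1.0 if A is preferred, 0.0 if B is, 0.5 for a tie."""
    ra, rb = ratings[model_a], ratings[model_b]
    expected_a = 1.0 / (1.0 + 10 ** ((rb - ra) / 400))
    ratings[model_a] = ra + k * (score_a - expected_a)
    ratings[model_b] = rb + k * ((1.0 - score_a) - (1.0 - expected_a))

ratings = defaultdict(lambda: 1000.0)  # assumed common starting rating
update_elo(ratings, "model_x", "model_y", score_a=1.0)  # annotators preferred model_x
```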
Each evaluation task consists of the following (a rough sketch of one such task record follows the list):
Two models generate responses to the same prompt
Annotators provide a point-wise evaluation of each response
Annotators express their preference between the two responses on a 7-point Likert scale
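The sketch below is a hypothetical representation of one task record, together with a simple mapping from the 7-point Likert preference to a pairwise outcome. The field names and the mapping are our assumptions, not the actual pipeline schema.

```python
from dataclasses import dataclass

@dataclass
class SideBySideTask:
    """Hypothetical record for one evaluation task."""
    prompt_id: int
    model_a: str
    model_b: str
    ratings_a: dict          # stand-alone (point-wise) ratings for response A
    ratings_b: dict          # stand-alone (point-wise) ratings for response B
    likert_preference: int   # 1 = strongly prefer A ... 4 = tie ... 7 = strongly prefer B

def likert_to_outcome(preference: int) -> float:
    """Collapse the 7-point preference into a win/tie/loss score for model A."""
    if preference < 4:
        return 1.0   # A preferred
    if preference > 4:
        return 0.0   # B preferred
    return 0.5       # tie
```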
To ensure thoroughness and reliability in the evaluation process, each task was executed in parallel three times by different human annotators. The ratings were then reviewed in two stages: an initial review layer and a final review layer. The figure below provides an overview of the evaluation pipeline design. After finalizing the tasks, a team of internal independent auditors randomly selected and reviewed 10% of the tasks for quality control.
Evaluation Methodology - Pipeline Design
Leaderboard rankings are determined using Rank (Upper Bound), which reflects a model’s statistical position based on confidence intervals. The ranking process follows these steps:
Count the number of models that are statistically significantly better than the target model.
Add 1 to this count to determine the model’s rank.
A model is considered statistically significantly better than another if its lower-bound score (95% confidence interval) is higher than the other model’s upper-bound score. Models receive the same rank when the same number of models are statistically better than each of them. This approach groups models based on statistical significance rather than raw scores, ensuring rankings reflect meaningful performance differences.
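This rule can be expressed as a small computation over each model’s reported score and confidence-interval half-width. The sketch below is our own illustration (function name and input format assumed), using values from the table that follows.

```python
def rank_upper_bound(scores):
    """scores: {model_name: (mean, ci_half_width)} -> {model_name: rank}."""
    ranks = {}
    for model, (mean, ci) in scores.items():
        upper = mean + ci
        # Count models whose lower bound exceeds this model's upper bound,
        # i.e. models that are statistically significantly better.
        better = sum(
            1 for other, (o_mean, o_ci) in scores.items()
            if other != model and (o_mean - o_ci) > upper
        )
        ranks[model] = better + 1
    return ranks

# Example with a subset of the leaderboard below:
print(rank_upper_bound({
    "o1-preview": (66.43, 5.47),
    "Claude 3.7 Sonnet (February 2025)": (64.93, 5.51),
    "Llama 3.1 8B Instruct": (17.42, 4.39),
}))  # the first two tie at rank 1; Llama 3.1 8B Instruct ranks 3
```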
Model | Score (95% CI)
---|---
o1-preview | 66.43±5.47
Claude 3.7 Sonnet (February 2025) | 64.93±5.51
GPT-4o (May 2024) | 64.58±5.52
GPT-4.5 Preview (February 2025) | 63.76±5.56
GPT-4 Turbo Preview | 60.76±5.64
Gemini 1.5 Pro (August 27, 2024) | 60.28±5.66
GPT-4o (August 2024) | 59.93±5.67
Claude 3.5 Sonnet (June 2024) | 59.38±5.67
Gemini 2.0 Flash | 55.60±5.73
Claude 3 Sonnet | 54.17±5.78
Claude 3 Opus | 52.78±5.77
GPT-4o mini | 51.74±5.77
GPT-4 | 51.39±5.77
Mistral Large 2 | 50.35±5.78
Llama 3.1 405B Instruct | 50.35±5.78
Gemini 1.5 Pro (May 2024) | 40.42±5.68
Llama 3.1 70B Instruct | 37.23±5.60
Command R+ | 30.21±5.30
Llama 3.1 8B Instruct | 17.42±4.39