
Japanese

Deprecated (as of March 2025)

Introduction

The Scale AI Multilingual Prompts Dataset, composed of 1,000 prompts per language, is tailored to enhance models’ interaction capabilities across multiple languages. This dataset specifically aims to refine chatbots’ proficiency in engaging with Japanese users, reflecting the complexity of Japanese communication.

Dataset Description

This dataset introduces single-turn prompts across a diversified range of scenarios, aiming to evaluate and improve models’ responses in both general and culturally nuanced conversations.

Every prompt is designed to have a clear goal, request, or instruction that specifies what the response needs to achieve, and this objective is relevant to the category assigned to the task. In addition, the complexity of the prompts varies through different combinations of the following building blocks: Local Context, Limitations, Role, Audience, Style, and Format.

For Limitations, a prompt can define one or several constraints. Specifying a Role ensures the model’s responses are judged within a specific, well-defined context, which aligns the evaluation criteria with the expected output. This approach provides consistency, improves relevance, and allows for a more targeted assessment of the model’s ability to emulate expertise in a given domain.
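
As a purely illustrative sketch (not the released schema), a prompt and its building blocks could be represented in Python as follows; every field name here is our own assumption for illustration:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PromptSpec:
    """Illustrative structure of one dataset prompt; field names are assumptions."""
    prompt_text: str                      # the Japanese prompt itself
    category: str                         # one of the categories listed below
    goal: str                             # the clear request/instruction the response must satisfy
    local_context: Optional[str] = None   # optional building blocks that add complexity
    limitations: List[str] = field(default_factory=list)  # zero or more constraints
    role: Optional[str] = None
    audience: Optional[str] = None
    style: Optional[str] = None
    format: Optional[str] = None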

The evaluation process intentionally focuses on more complex use cases, which differ from the simpler and more straightforward conversations between a regular user and a chatbot. This approach more effectively differentiates models based on their varying underlying capabilities.

 
Prompt categories and their definitions:

  • Coding & Technical Assistance: Assist with writing, understanding, and fixing code in a simple, clear manner for those working on programming projects.
  • Creative Expression: Encourage creativity in music, writing, and art by providing easy-to-follow guidance and inspiration for artists.
  • Daily Life Assistance: Offer advice on everyday tasks like cooking, booking services, and organizing schedules in a straightforward, practical way.
  • Educational Support: Help students with their studies, homework, and any school-related problems in a way that's easy to understand.
  • Entertainment & Recreation: Create fun experiences with games, trivia, sports discussions, and interactive stories for people of all ages.
  • Idea Development & Content Exploration: Help generate new ideas, answer questions, and clarify thoughts in a clear, accessible way for creative and curious minds.
  • Information & Learning: Provide easy explanations of complex topics, help with learning new skills, and give advice on a wide range of subjects.
  • Other: Cover any other tasks not listed above, providing a flexible and understandable approach for a variety of needs.
  • Personal & Professional Organization: Help people plan their personal and work lives, including events, budgets, and projects, with simple, actionable advice.
  • Shopping & Consumer Research: Guide on making smart shopping choices with clear comparisons, recommendations, and product insights.
  • Travel Assistance: Make travel planning simple by providing easy-to-understand advice on destinations, planning, and local insights.
  • Workplace Productivity: Help professionals streamline their work with tips on writing, presenting, organizing, and more, in a straightforward manner.
  • Writing & Communication: Improve communication skills with tips on writing everything from social media posts to professional emails, in a clear and engaging way.

Data Sample

Category: Coding & Technical Assistance
Definition: Assist with writing, understanding, and fixing code in a simple, clear manner for those working on programming projects.

C言語で文字列を大文字から小文字に変換する方法を教えてください。
サンプルコードと、そのサンプルコードを実行した結果も一緒にお願いします。
(Translation: Please explain how to convert a string from uppercase to lowercase in C. Please also include sample code and the output from running that sample code.)

Construction Process

Development of this dataset followed a structured approach:

  • Original Content Requirement: Unique content generation was enforced, prohibiting the use of existing resources or models.
  • Review Stages: The content underwent qualitative and grammatical assessments.
  • Final Quality Audit: A final evaluation on a select sample refined the dataset to 1,000 prompts.

Experts fluent in the relevant languages and knowledgeable about the local culture were chosen to contribute, ensuring the dataset's relevance and authenticity.

Quality was maintained through:

  • Multi-stage Reviews: Ensuring clarity, complexity, and cultural specificity.
  • Internal Benchmarking: Monitoring and assessing annotator performance.
  • Final Audits: Revising or removing prompts that did not meet quality standards.

Evaluation Taxonomy

To capture a nuanced assessment of both general language understanding and regionalized styles from Japan, we created an evaluation taxonomy specific to this context. Each model response was evaluated against a set of stand-alone criteria covering a diverse set of use cases, and side-by-side against another model response to measure preference on a 7-point Likert scale.

Main stand-alone criteria and their sub-categories:

  1. Honesty:
    1. Understanding: 
      1. Prompt Understanding - Language Consistency: The response uses the same language as the prompt, unless another language is explicitly requested.
      2. Localization: The response shows that the model understands the context implied by the language, is appropriate for the locale, and aligns with its culture.
    2. Consistency:
      1. The response is consistent: all statements are coherent and no statement contradicts an earlier one.
    3. Factual Accuracy:
      1. Claim Factuality: The response makes true and accurate claims that can be fact-checked and supported.
      2. Claim Verifiability: The response makes verifiable claims and does not present opinions as facts.
  2. Helpfulness:
    1. Instruction Following:
      1. Response Relevance: Assess if all model response content is relevant to address the prompt request.
      2. Main Request Adherence:  Evaluate how well the response adheres to the main request specified in the prompt. 
      3. Constraint Adherence: Evaluate if the response covers all constraints specified in the prompt.
    2. Writing:
      1. Mechanics: Are the spelling, grammar, punctuation, and syntax all correct?
      2. Structural Clarity: Is the response well-organized with clear, logical flow and appropriate formatting?
      3. Verbosity: Is the response concise, avoiding repeated arguments and overly wordy explanations?
      4. Fluency: The response is easy to understand and sounds natural, with no awkward sentence constructions carried over from languages such as English.
      5. Style and tone: Are the style and tone of the writing suitable for the topic and audience?
  3. Harmlessness:
    1. Does the response contain harmful content or facilitate harmful actions?
      1. No: The response is free from harmful content and does not facilitate harmful actions.
      2. Yes: The response contains harmful elements or encourages harmful actions.

Each sub-category is rated on whether the response complies with its definition (i.e., a binary 'yes' or 'no' score).
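
As an illustration of this binary scheme, a single stand-alone evaluation might be recorded as a nested mapping from sub-criteria to 'yes'/'no' values; the key names below paraphrase the taxonomy above, and the exact schema is an assumption:

# Hypothetical record of one stand-alone evaluation (schema is an assumption).
standalone_ratings = {
    "honesty": {
        "language_consistency": "yes",
        "localization": "yes",
        "consistency": "yes",
        "claim_factuality": "no",
        "claim_verifiability": "yes",
    },
    "helpfulness": {
        "response_relevance": "yes",
        "main_request_adherence": "yes",
        "constraint_adherence": "no",
        "mechanics": "yes",
        "structural_clarity": "yes",
        "verbosity": "yes",
        "fluency": "yes",
        "style_and_tone": "yes",
    },
    "harmlessness": {
        "contains_harmful_content": "no",  # "no" means the response is harmless
    },
}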

After the stand-alone evaluation, responses are compared side-by-side on a 7-point Likert scale. This comparative assessment identifies the preferable model response, supported by a detailed justification tied to the evaluation criteria. For the leaderboard rankings, we rank the models by their side-by-side Elo scores.
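
The page does not specify the exact Elo formulation used. As a hedged sketch only, a standard Elo update driven by one side-by-side comparison could look like the following, where mapping the 7-point Likert preference to win/draw/loss and the K-factor of 32 are assumptions:

def elo_update(rating_a: float, rating_b: float, likert: int, k: float = 32.0):
    """Standard Elo update from one side-by-side comparison.

    `likert` is the 7-point preference; here 1-3 are taken to favor model A,
    4 as a tie, and 5-7 to favor model B. This mapping and the K-factor are
    assumptions, not the leaderboard's documented procedure.
    """
    score_a = 1.0 if likert < 4 else 0.5 if likert == 4 else 0.0
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    rating_a += k * (score_a - expected_a)
    rating_b += k * ((1.0 - score_a) - (1.0 - expected_a))
    return rating_a, rating_b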

Evaluation Methodology

Each model is paired with other models, and each pairing receives a randomly chosen prompt from the set of 1,000 prompts described above.

Each evaluation task consists of the following:

  • Two models generate the responses for a prompt
  • Annotators provide a point-wise evaluation of each response
  • Annotators express their preference between the two responses on a 7-point Likert scale

To ensure thoroughness and reliability in the evaluation process, each task was executed in parallel three times by different human annotators. The ratings were then reviewed in two stages: an initial review layer and a final review layer. The figure below provides an overview of the evaluation pipeline design. After the tasks were finalized, a team of independent internal auditors randomly selected and reviewed 10% of them for quality control.
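
How the three parallel ratings are combined is not stated here. As a minimal sketch, under the assumption that the median of the three Likert preferences is used, the aggregation could look like this:

from statistics import median

def aggregate_task(likert_scores: list[int]) -> float:
    """Combine the three parallel annotators' Likert preferences for one task.

    Taking the median of the three 7-point scores is an assumption used for
    illustration; the page does not state the actual aggregation rule.
    """
    assert len(likert_scores) == 3
    return median(likert_scores)

# Example: two annotators slightly prefer one response (3), one is neutral (4).
print(aggregate_task([3, 4, 3]))  # -> 3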

Evaluation Methodology - Pipeline Design

Leaderboard rankings are determined using Rank (Upper Bound), which reflects a model’s statistical position based on confidence intervals. The ranking process follows these steps:
  1. Count the number of models that are statistically significantly better than the target model.
  2. Add 1 to this count to determine the model’s rank.
A model is considered statistically significantly better than another if its lower-bound score (95% confidence interval) is higher than the other model's upper-bound score. Models receive the same rank when the same number of models are statistically better than each of them. This approach groups models based on statistical significance rather than raw scores, ensuring rankings reflect meaningful performance differences.
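
This rule translates directly into code. The sketch below assumes each model's score is reported as a mean plus or minus the half-width of its 95% confidence interval, as in the table that follows:

def rank_upper_bound(scores: dict[str, tuple[float, float]]) -> dict[str, int]:
    """Compute Rank (Upper Bound) from (mean, ci_half_width) per model.

    A model X is statistically significantly better than Y when X's lower
    bound (mean - ci) exceeds Y's upper bound (mean + ci). A model's rank is
    1 plus the number of models significantly better than it.
    """
    ranks = {}
    for name, (mean, ci) in scores.items():
        better = sum(
            1
            for other, (o_mean, o_ci) in scores.items()
            if other != name and (o_mean - o_ci) > (mean + ci)
        )
        ranks[name] = better + 1
    return ranks

# Example with a three-model subset of the leaderboard below (ranks are
# relative to this subset only):
print(rank_upper_bound({
    "o1-preview": (1118.0, 43.0),
    "GPT-4o (August 2024)": (1077.0, 42.0),
    "Aya 23 35B": (760.0, 48.0),
}))
# -> {'o1-preview': 1, 'GPT-4o (August 2024)': 1, 'Aya 23 35B': 3}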
Last updated: March 27, 2025

Performance Comparison

Rank | Model | Score
1 | o1-preview | 1118.00 ± 43.00
2 | Gemini 2.0 Pro (December 2024) | 1117.00 ± 36.00
3 | o3-mini | 1104.00 ± 31.00
4 | Gemini 1.5 Pro (August 27, 2024) | 1093.00 ± 44.00
5 | o1 (December 2024) | 1086.00 ± 30.00
6 | Gemini 1.5 Pro (November 2024) | 1084.00 ± 30.00
7 | GPT-4o (August 2024) | 1077.00 ± 42.00
8 | Claude 3.5 Sonnet (June 2024) | 1069.00 ± 36.00
9 | Gemini Pro Flash 2 | 1064.00 ± 33.00
10 | GPT-4 (November 2024) | 978.00 ± 27.00
11 | Mistral Large 2 | 967.00 ± 41.00
12 | Gemini 1.5 Flash | 966.00 ± 56.00
13 | Aya Expanse 32B | 960.00 ± 32.00
14 | Llama 3.1 405B Instruct | 895.00 ± 83.00
15 | Gemma 2 27B | 892.00 ± 42.00
16 | Llama 3.3 70B Instruct | 778.00 ± 35.00
17 | Aya 23 35B* | 760.00 ± 48.00