Apple Releases Pico-Banana-400K Dataset to Advance Text-Guided Image Editing

"Pico-Banana-400K is a curated dataset of 400,000 images developed by Apple researchers to make it easier to create text-guided image editing models. The images were generated using Google's Nano-Banana to modify real photographs from the Open Images collecion and were then filtered using Gemini-2.5-Pro based on their overall quality and prompt compliance. According to the researchers, the dataset aims to close a gap in the availability of large-scale, high-quality, and fully shareable image editing datasets."
"What distinguishes Pico-Banana-400K from previous synthetic datasets is our systematic approach to quality and diversity. We employ a fine-grained image editing taxonomy to ensure comprehensive coverage of edit types while maintaining precise content preservation and instruction faithfulness through MLLM-based quality scoring and careful curation. As mentioned, Apple researchers started selecting a number of real photographs from Open Images, including humans, objects, and textual scenes."
"The evaluation criteria used to determine success or failure include instruction compliance (40%), editing realism (25%), preservation balance (20%), and technical quality (15%). About 56K generated images were retained as failure cases for robustness and preference learning. The researchers devised 35 types of edits, organized into eight categories, including pixel and photometric adjustments (e.g., change overall color tone), object-level semantics (e.g., relocate an object, change an object's color), scene composition (e.g., add new background),"
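The excerpt names three of the eight taxonomy categories. A sketch of how such a taxonomy might be represented, using only the categories and example edits mentioned in the article; the remaining five categories are elided rather than invented:

```python
# Partial sketch of the edit taxonomy as a mapping from category to
# example edit types. Only entries mentioned in the article are shown;
# the full dataset defines 35 edit types across eight categories.

EDIT_TAXONOMY = {
    "pixel_and_photometric": ["change overall color tone"],
    "object_level_semantics": ["relocate an object", "change an object's color"],
    "scene_composition": ["add new background"],
    # ...five further categories not listed in the excerpt
}
```
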
Read at InfoQ