# NGL-Prompter: Training-Free Garment Pattern Estimation

Estimating sewing patterns from images is a practical approach for creating high-quality 3D garments. Because real-world pattern-image paired data are scarce, prior approaches fine-tune large vision-language models (VLMs) on synthetic garment datasets, which limits their generalization to real-world images.
We propose NGL (Natural Garment Language), a novel intermediate representation that restructures GarmentCode into a format more understandable to large language models, and NGL-Prompter, a training-free pipeline that queries large VLMs to extract structured garment parameters from a single image, which are then deterministically mapped to valid GarmentCode. Our approach achieves state-of-the-art performance on standard geometry metrics and is strongly preferred in both human and GPT-based perceptual evaluations. NGL-Prompter can recover multi-layer outfits, whereas competing methods focus mostly on single-layer garments, highlighting its strong generalization to real-world images even when parts of the garment are occluded.
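The deterministic NGL-to-GarmentCode step described above can be sketched as a fixed lookup from VLM-predicted attribute strings to parameter values. The attribute names, values, and mapping below are illustrative placeholders, not the paper's actual NGL schema:

```python
# Hypothetical sketch of the training-free mapping stage: a VLM returns
# structured garment attributes (NGL), and a fixed rule table converts
# them to GarmentCode-style parameters. All names here are assumptions
# made for illustration, not the real NGL or GarmentCode vocabulary.

NGL_TO_GARMENTCODE = {
    "sleeve_length": {"short": 0.3, "elbow": 0.55, "long": 1.0},
    "skirt_style": {"straight": "SkirtPanel", "flared": "FlaredSkirt"},
}

def ngl_to_garmentcode(ngl_params):
    """Deterministically map NGL attribute strings to GarmentCode-like values.

    Unknown attributes or values raise an error, so the output is always
    a valid (schema-conforming) parameter set.
    """
    out = {}
    for key, value in ngl_params.items():
        table = NGL_TO_GARMENTCODE.get(key)
        if table is None or value not in table:
            raise ValueError(f"unsupported NGL attribute: {key}={value}")
        out[key] = table[value]
    return out

print(ngl_to_garmentcode({"sleeve_length": "long", "skirt_style": "flared"}))
```

Because the mapping is a pure lookup rather than a learned model, any well-formed NGL output from the VLM translates to valid GarmentCode without further training.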

Figure: the NGL-Prompter pipeline.

NGL comes in two variants.

NGL-Prompter is the first method to successfully reconstruct occluded, multi-layer garments without training – a significant advantage over existing approaches that are limited to single-layer garments.

Detailed comparisons demonstrate improved capture of garment details such as skirt slits, mini dresses, and pant styles.
Geometry (Dress4D Dataset):
| Method | Chamfer Distance ↓ | F-Score ↑ |
|---|---|---|
| ChatGarment | 3.99 | 0.78 |
| NGL-0 (Qwen) | 2.03 | 0.82 |
Perceptual Evaluations (ASOS 5K):
Citation:

```bibtex
@article{badalyan2026ngl,
  title   = {{NGL-Prompter}: Training-Free Sewing Pattern Estimation from a Single Image},
  author  = {Badalyan, Anna and Selvaraju, Pratheba and Becherini, Giorgio and Taheri, Omid and Fernandez Abrevaya, Victoria and Black, Michael},
  journal = {arXiv preprint arXiv:2602.20700},
  year    = {2026},
}
```