Hand1000: Generating Realistic Hands from Text with Only 1,000 Images

Haozhuo Zhang1, Bin Zhu2*, Yu Cao2, Yanbin Hao3
1Peking University
2Singapore Management University
3University of Science and Technology of China

*Indicates the corresponding author

Comparison of hand image generation results between Stable Diffusion and our Hand1000. Given the same text prompt, Stable Diffusion produces deformed and chaotic hands. In contrast, our proposed Hand1000 manages to generate anatomically correct and realistic hands while preserving details such as character, clothing, and colors.

Abstract

Text-to-image generation models have achieved remarkable advancements in recent years, aiming to produce realistic images from textual descriptions. However, these models often struggle with generating anatomically accurate representations of human hands. The resulting images frequently exhibit issues such as incorrect numbers of fingers, unnatural twisting or interlacing of fingers, or blurred and indistinct hands. These issues stem from the inherent complexity of hand structures and the difficulty in aligning textual descriptions with precise visual depictions of hands. To address these challenges, we propose a novel approach named Hand1000 that enables the generation of realistic hand images with the target gesture using only 1,000 training samples. The training of Hand1000 is divided into three stages. The first stage aims to enhance the model’s understanding of hand anatomy by using a pre-trained hand gesture recognition model to extract gesture representations. The second stage further optimizes the text embedding by incorporating the extracted hand gesture representation, improving alignment between the textual descriptions and the generated hand images. The third stage utilizes the optimized embedding to fine-tune the Stable Diffusion model to generate realistic hand images. In addition, we construct the first publicly available dataset specifically designed for text-to-hand image generation. Building on an existing hand gesture recognition dataset, we adopt advanced image captioning models and LLaMA3 to generate high-quality textual descriptions enriched with detailed gesture information. Extensive experiments demonstrate that Hand1000 significantly outperforms existing models in producing anatomically correct hand images while faithfully representing other details in the text, such as faces, clothing, and colors.
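
To make the three-stage recipe concrete, the sketch below illustrates Stage I in PyTorch: averaging per-image features from a pre-trained hand gesture recognition model into a single mean gesture representation. The backbone interface gesture_net and the batch size are assumptions for illustration, not the paper's actual code.

import torch

@torch.no_grad()
def mean_gesture_feature(images, gesture_net, device="cuda"):
    # Stage I sketch: average per-image gesture features over the dataset.
    # `images`: (N, 3, H, W) tensor already preprocessed for `gesture_net`,
    # a pre-trained hand gesture recognition backbone assumed to return
    # (B, D) feature vectors (hypothetical interface).
    gesture_net.eval().to(device)
    feats = []
    for batch in images.split(32):          # small batches to limit memory use
        feats.append(gesture_net(batch.to(device)).cpu())
    feats = torch.cat(feats, dim=0)         # (N, D)
    return feats.mean(dim=0)                # (D,) mean hand gesture feature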

Architecture

The proposed Hand1000 is designed with a three-stage training process. In Stage I, the primary objective is to compute the mean hand gesture feature from the training images. Stage II builds on this by concatenating the mean hand gesture feature obtained in Stage I with the corresponding text embeddings. The concatenated features are mapped into a fused embedding, which is then linearly fused with the original text embedding to produce a double-fused embedding. This double-fused embedding is optimized with a reconstruction loss computed through a frozen Stable Diffusion model. Stage III fine-tunes the Stable Diffusion model for image generation, conditioned on the frozen optimized embedding obtained from Stage II.
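
The following PyTorch sketch illustrates the Stage II fusion described above. The module name, the embedding dimensions, and the linear-fusion weight alpha are assumptions chosen for illustration rather than the official implementation.

import torch
import torch.nn as nn

class DoubleFusion(nn.Module):
    # Stage II sketch (illustrative names and shapes, not the official code):
    # concatenate the mean gesture feature with the text embedding, map the
    # result back to the text-embedding dimension, then linearly blend it
    # with the original text embedding to obtain the double-fused embedding.
    def __init__(self, text_dim=768, gesture_dim=512, alpha=0.5):
        super().__init__()
        self.proj = nn.Linear(text_dim + gesture_dim, text_dim)
        self.alpha = alpha                  # linear-fusion weight (assumed value)

    def forward(self, text_emb, gesture_feat):
        # text_emb: (B, L, text_dim) token embeddings from the text encoder
        # gesture_feat: (gesture_dim,) mean gesture feature from Stage I
        g = gesture_feat.expand(text_emb.size(0), text_emb.size(1), -1)
        fused = self.proj(torch.cat([text_emb, g], dim=-1))
        return self.alpha * fused + (1.0 - self.alpha) * text_emb

In the full pipeline, the resulting double-fused embedding would be optimized against the standard diffusion reconstruction loss while the Stable Diffusion weights remain frozen, and is itself frozen before the Stage III fine-tuning.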

Dataset Construction

The dataset construction begins with generating a textual description for each image using an image captioning model (e.g., BLIP2). This description, along with the corresponding gesture label, is then fed into the LLaMA3 model to produce a text description enriched with gesture information.
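
A minimal sketch of this captioning-and-enrichment step is shown below. The BLIP-2 checkpoint Salesforce/blip2-opt-2.7b is our choice for illustration (the paper only names BLIP2 as an example captioner), and llm_generate is a hypothetical text-in/text-out callable standing in for LLaMA3.

from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# BLIP-2 captioner (this specific checkpoint is an assumption).
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
captioner = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

def build_enriched_caption(image_path, gesture_label, llm_generate):
    # Caption the image, then ask an LLM (LLaMA3 in the paper; here a
    # hypothetical callable `llm_generate`) to weave in the gesture label.
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    ids = captioner.generate(**inputs, max_new_tokens=30)
    caption = processor.batch_decode(ids, skip_special_tokens=True)[0].strip()
    prompt = (
        "Rewrite the following image caption so that it explicitly states "
        f"the hand gesture '{gesture_label}': {caption}"
    )
    return llm_generate(prompt)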

Generated Images Comparison

Comparison of images with the "four fingers up" hand gesture generated by Stable Diffusion, fine-tuned Stable Diffusion, Stable Diffusion enhanced with Imagic, fine-tuned Stable Diffusion enhanced with Imagic, and our Hand1000.

More Results: Stable Diffusion vs Hand1000

BibTeX

@misc{zhang2024hand1000generatingrealistichands,
  title={Hand1000: Generating Realistic Hands from Text with Only 1,000 Images},
  author={Haozhuo Zhang and Bin Zhu and Yu Cao and Yanbin Hao},
  year={2024},
  eprint={2408.15461},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2408.15461},
}