SesaHand: Enhancing 3D Hand Reconstruction via Controllable Generation with Semantic and Structural Alignment
Abstract
Recent studies on 3D hand reconstruction have demonstrated the effectiveness of synthetic training data in improving estimation performance. However, most methods rely on game engines to synthesize hand images, which often lack diversity in textures and environments and fail to include crucial components such as arms or interacting objects. Generative models are promising alternatives for producing diverse hand images, but they still suffer from misalignment issues. In this paper, we present SesaHand, which enhances controllable hand image generation from both semantic and structural alignment perspectives for 3D hand reconstruction. For semantic alignment, we propose a pipeline with Chain-of-Thought inference to extract human behavior semantics from image captions generated by a Vision-Language Model. These semantics suppress human-irrelevant environmental details and provide sufficient human-centric context for hand image generation. For structural alignment, we introduce hierarchical structural fusion, which integrates structural information at different granularities to refine features and better align the hand with the overall human body in generated images. We further propose a hand structure attention enhancement method that efficiently strengthens the model's attention on hand regions. Experiments demonstrate that our method not only outperforms prior work in generation performance but also improves 3D hand reconstruction when the generated hand images are used for training.
Method
Human Behavior Semantics Extraction
(a) Comparison of hand image generation with a VLM-generated caption (top) and with human behavior semantics (bottom). Overthinking in VLM captions shifts attention toward irrelevant objects in later denoising steps, whereas human behavior semantics guide the model to focus on human-related regions, producing more plausible hand images. (b) CoT inference in the human behavior semantics extraction pipeline.
Attention analysis. Visualization of attention maps in early and later blocks of the UNet decoder with the VLM-generated image caption (top) and with human behavior semantics (bottom). The VLM-generated caption causes attention to deviate toward irrelevant objects in later denoising steps, whereas human behavior semantics yield more focused attention on human- and hand-related regions.
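The extraction pipeline itself is straightforward to prototype. Below is a minimal sketch, not the authors' code: a VLM first captions the hand image, and a chain-of-thought prompt then distills that caption into concise human behavior semantics while discarding environment details. The `query_vlm` / `query_llm` helpers, the prompt wording, and the last-line heuristic are illustrative assumptions; substitute whatever captioning and reasoning models you use.

```python
# Minimal sketch of the two-stage semantics extraction (assumed design, not the
# authors' code): 1) a VLM captions the image, 2) a chain-of-thought prompt
# distills the caption into human behavior semantics, dropping scene details.

CAPTION_PROMPT = "Describe this image in detail."

COT_PROMPT = """You are given an image caption.
Step 1: List every phrase that describes the person (pose, action, hands, held objects).
Step 2: List every phrase that only describes the environment or background.
Step 3: Using only the phrases from Step 1, write one concise sentence describing
the human behavior, keeping hand-object interactions explicit.
Caption: {caption}
"""

def query_vlm(image_path: str, prompt: str) -> str:
    """Hypothetical stand-in for a captioning VLM call."""
    raise NotImplementedError

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for the LLM used in the chain-of-thought step."""
    raise NotImplementedError

def extract_behavior_semantics(image_path: str) -> str:
    caption = query_vlm(image_path, CAPTION_PROMPT)           # verbose, scene-heavy caption
    cot_output = query_llm(COT_PROMPT.format(caption=caption))
    # Heuristic: keep only the final concise sentence (Step 3) as the generation prompt.
    return cot_output.strip().splitlines()[-1]
```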
Structural Alignment
(a) Hierarchical Structural Fusion. Multilevel self-attention maps, which capture the structural information of the input image, are extracted from the ControlNet encoder and middle blocks. These maps are aggregated and applied to obtain the refined features fed to the decoder. (b) Hand Structure Attention Enhancement. Applying the enhancement (bottom) effectively highlights local structural human- and hand-related features compared to the original cross-attention maps (top).
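Since the caption only outlines the two operators, the PyTorch sketch below shows one plausible reading rather than the authors' exact implementation: multilevel self-attention maps from the ControlNet blocks act as token-mixing matrices applied to the feature at each level and averaged, and cross-attention between hand-region latents and hand-related text tokens is boosted inside a hand mask before renormalization. Tensor shapes, the mask source, `hand_token_ids`, and the `boost` factor are all assumptions.

```python
import torch
import torch.nn.functional as F

def hierarchical_structural_fusion(feat, self_attn_maps):
    """Refine a decoder-bound feature with multilevel ControlNet self-attention maps.

    feat:           (B, C, H, W) feature to be refined.
    self_attn_maps: list of (B, n_i, n_i) self-attention maps, n_i = s_i * s_i,
                    taken from encoder/middle blocks at different resolutions.
    """
    B, C, H, W = feat.shape
    refined = torch.zeros_like(feat)
    for attn in self_attn_maps:
        s = int(attn.shape[-1] ** 0.5)
        # Bring the feature to this level's token grid, mix tokens with the
        # structural attention map, then return to full resolution.
        f = F.interpolate(feat, size=(s, s), mode="bilinear", align_corners=False)
        tokens = f.flatten(2).transpose(1, 2)                    # (B, n, C)
        mixed = torch.bmm(attn, tokens).transpose(1, 2)          # (B, C, n)
        mixed = mixed.reshape(B, C, s, s)
        refined = refined + F.interpolate(mixed, size=(H, W), mode="bilinear", align_corners=False)
    return refined / len(self_attn_maps)                         # aggregate across levels


def enhance_hand_attention(attn_probs, hand_mask, hand_token_ids, boost=2.0):
    """Amplify cross-attention between hand-region latents and hand-related
    text tokens, then renormalize over the token axis.

    attn_probs:     (B, heads, N, T) softmaxed cross-attention, N = latent positions.
    hand_mask:      (B, 1, H, W) binary hand-region mask (e.g. rendered from the hand mesh).
    hand_token_ids: indices of hand-related tokens in the prompt (assumed known).
    """
    B, heads, N, T = attn_probs.shape
    s = int(N ** 0.5)
    mask = F.interpolate(hand_mask.float(), size=(s, s), mode="nearest")
    mask = mask.flatten(2).transpose(1, 2).unsqueeze(1)          # (B, 1, N, 1)
    token_sel = torch.zeros(T, device=attn_probs.device)
    token_sel[hand_token_ids] = 1.0                              # mark hand-related tokens
    gain = 1.0 + (boost - 1.0) * mask * token_sel                # boost hand-region x hand-token pairs
    weighted = attn_probs * gain
    return weighted / weighted.sum(dim=-1, keepdim=True).clamp_min(1e-6)
```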
Qualitative Results
Qualitative comparisons with state-of-the-art text-to-image generation models.
Qualitative comparison of InterWild with and without our generated images on the MSCOCO validation set.
Comparison with commercial models. Given a hand mesh image and a text prompt, GPT-4o, Gemini 2.0 Flash, Nano Banana, and Nano Banana Pro fail to generate well-aligned, realistic hand images, despite their capability across diverse image generation tasks.
More qualitative results of hand image generation with hand-object interactions.
BibTeX
@inproceedings{zhao2026sesahand,
  title={SesaHand: Enhancing 3D Hand Reconstruction via Controllable Generation with Semantic and Structural Alignment},
  author={Zhuoran Zhao and Xianghao Kong and Linlin Yang and Zheng WEI and Pan Hui and Anyi Rao},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=sKMgGQQy7g}
}