VLA-IAP: Training-Free Visual Token Pruning via Interaction Alignment for Vision-Language-Action Models

Jintao Cheng1, Haozhe Wang3, Weibin Li3, Gang Wang2, Yipu Zhang1, Xiaoyu Tang3, Jin Wu5, Xieyuanli Chen4, Yunhui Liu2, Wei Zhang1
1The Hong Kong University of Science and Technology
2The Chinese University of Hong Kong
3South China Normal University
4National University of Defense Technology
5University of Science and Technology Beijing
Corresponding Author
Teaser Image

Comparison of Perception-First vs. Interaction-First token pruning paradigms. Perception-First baselines (top) prematurely lose the manipulation target due to early semantic misalignment, a vulnerability that a simple temporal-stacking branch fails to resolve without explicit interaction modeling. In contrast, our VLA-IAP (bottom) adopts an Interaction-First approach: by coupling geometric priors with an IoU-aware dynamic strategy that transitions from conservative (background-only) to aggressive (full) pruning, VLA-IAP preserves the physical target for precise execution.

Abstract

Vision-Language-Action (VLA) models have rapidly advanced embodied intelligence, enabling robots to execute complex, instruction-driven tasks. However, as model capacity and visual context length grow, the inference cost of VLA systems becomes a major bottleneck for real-world deployment on resource-constrained platforms. Existing visual token pruning methods mainly rely on semantic saliency or simple temporal cues, overlooking continuous physical interaction, a fundamental property of VLA tasks. Consequently, current approaches often prune visually sparse yet structurally critical regions that support manipulation, leading to unstable behavior during early task phases. To overcome this, we propose a shift toward an explicit Interaction-First paradigm. Our training-free method, VLA-IAP (Interaction-Aligned Pruning), introduces a geometric prior mechanism to preserve structural anchors and a dynamic scheduling strategy that adapts pruning intensity based on semantic-motion alignment. This enables a conservative-to-aggressive transition, ensuring robustness during early uncertainty and efficiency once interaction is locked. Extensive experiments show that VLA-IAP achieves a 97.8% success rate with a 1.25× speedup on the LIBERO benchmark, and up to a 1.54× speedup while maintaining performance comparable to the unpruned backbone. Moreover, the method delivers consistently strong performance across multiple model architectures, three simulation environments, and a real robot platform, validating its generalization capability and practical applicability.

Comparison of pruning paradigms

Overview of the proposed interaction-aligned dynamic strategy for vision-language-action. Given consecutive visual frames and a language instruction, a vision encoder extracts patch features, while three complementary priors are constructed: semantic prior S, motion prior M (via Gaussian modeling, history accumulation, and morphology), and geometric prior G (Sobel-based edge enhancement). The priors are projected and fused, and an IoU-based alignment score is computed to adaptively select background-only filtering or conservative/aggressive masking, producing a union mask (S ∪ M). After final token selection, the resulting visual tokens are fed into a VLA LLM/policy to generate the robot action.
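The prior fusion and IoU-gated scheduling described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the mean-based binarization of each prior, the alignment threshold `tau`, the additive fusion, and the `keep_ratio` are all assumptions chosen for demonstration.

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude as a stand-in geometric prior G."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

def iou(a, b):
    """IoU between two binary masks; 0 if both are empty."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def select_tokens(S, M, G, tau=0.5, keep_ratio=0.5):
    """Illustrative IoU-gated conservative-to-aggressive token selection.

    S, M, G: per-patch semantic, motion, and geometric prior maps.
    Returns a boolean keep-mask and the semantic-motion alignment score.
    """
    # Binarize each prior at its mean (illustrative threshold choice).
    s, m, g = S > S.mean(), M > M.mean(), G > G.mean()
    align = iou(s, m)  # semantic-motion alignment score
    if align < tau:
        # Conservative phase: prune only tokens outside S ∪ M ∪ G,
        # so structural anchors survive early uncertainty.
        keep = s | m | g
    else:
        # Aggressive phase: keep the top-k fused scores inside S ∪ M.
        fused = (S + M + G) * (s | m)
        k = max(1, int(keep_ratio * fused.size))
        thresh = np.partition(fused.ravel(), -k)[-k]
        keep = fused >= thresh
    return keep, align
```

In this sketch the gate moves in one direction as the rollout progresses: while semantic and motion evidence disagree (low IoU), only clear background is dropped; once they lock onto the same region, pruning tightens to a fixed retention budget.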

Experiments

Overall Performance

Overall Performance Visualization

Figure 1: Overview of qualitative results. Visualization of successful manipulation rollouts across all evaluation environments, including LIBERO, CALVIN, VLABench, and Real-World tasks.


Comprehensive Performance Comparison

Comprehensive Performance Comparison Table

Table 1: Comprehensive Performance Comparison. We evaluate DreamVLA, π0 (LIBERO) and π0.5 (VLABench) across multiple benchmarks under varying token retention ratios. Ours (VLA-IAP) demonstrates superior overall robustness and higher average success rates (↑), especially in complex reasoning tasks.


OpenVLA-OFT Results on LIBERO Benchmark

OpenVLA-OFT Results Table

Table 2: OpenVLA-OFT Results on LIBERO Benchmark. Comparison of VLA models (Part I) and pruning methods (Part II) using OFT (7B) backbone.


Memory and Runtime Analysis

Memory and Runtime Analysis Table

Table 3: Memory and Runtime Analysis of Acceleration Methods on π0 across VLABench. Detailed comparison of maximum GPU memory consumption and CUDA runtime across different vision-token retention rates.

Rollouts of VLA-IAP

LIBERO

Spatial-Comparison

Object-Comparison

Goal-Comparison

Long-Comparison

Spatial (Baseline)

Object (Baseline)

Goal (Baseline)

Long (Baseline)

Spatial (VLA-IAP)

Object (VLA-IAP)

Goal (VLA-IAP)

Long (VLA-IAP)

VLABench

Add Condiment (Baseline)

Select Poker (Baseline)

Select Chemistry tube (Baseline)

Add Condiment (VLA-IAP)

Select Poker (VLA-IAP)

Select Chemistry tube (VLA-IAP)

CALVIN

Full Sequence

Open Drawer (Baseline)

Lift Red Block (Baseline)

Open Drawer (VLA-IAP)

Lift Red Block (VLA-IAP)

Real-World

Bread Simple

Dual Arm (Baseline)

Dual Arm (VLA-IAP)

Bread Long (Baseline)

Bread Long (VLA-IAP)


BibTeX

@article{cheng2026vlaiap,
  title={VLA-IAP: Training-Free Visual Token Pruning via Interaction Alignment for Vision-Language-Action Models},
  author={Cheng, Jintao and Wang, Haozhe and Li, Weibin and Wang, Gang and Zhang, Yipu and Tang, Xiaoyu and Wu, Jin and Chen, Xieyuanli and Liu, Yunhui and Zhang, Wei},
  journal={arXiv preprint arXiv:2603.22991},
  year={2026}
}