Performance Analysis of Tabular and Fuzzy Q-learning Under Varying State and Action Space Resolution

Authors

  • Roman Zajdel, Rzeszow University of Technology

Abstract

Reinforcement learning (RL) algorithms such as Q-learning are widely applied to control tasks with continuous state spaces, which require either discretization or function approximation. However, how the resolution of the state and action spaces affects learning efficiency and convergence stability remains poorly understood, particularly when classical tabular approaches are compared with fuzzy function approximation.
This study presents an in-depth experimental analysis of Q(0)-learning and trace-based Q(λ)-learning applied to three benchmark control problems: Cart-Pole, Ball-Beam, and Mountain Car. The experiments systematically investigate how increasing the granularity of state discretization (the number of bins), the number of fuzzy sets, and the size of the action space influences convergence speed and the variance of the results.
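For reference, the update rules underlying this comparison can be sketched in LaTeX as follows; this is a minimal sketch assuming accumulating eligibility traces, and the paper's exact trace variant may differ.

    % One-step Q(0) update
    Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha\bigl[r_{t+1} + \gamma \max_{a} Q(s_{t+1},a) - Q(s_t,a_t)\bigr]

    % Q(\lambda) update with eligibility traces e_t(s,a)
    \delta_t = r_{t+1} + \gamma \max_{a} Q(s_{t+1},a) - Q(s_t,a_t)
    e_t(s_t,a_t) \leftarrow e_t(s_t,a_t) + 1
    Q(s,a) \leftarrow Q(s,a) + \alpha\,\delta_t\,e_t(s,a) \quad \text{for all } (s,a)
    e_t(s,a) \leftarrow \gamma\lambda\,e_t(s,a) \quad \text{for all } (s,a)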
The results clearly demonstrate that Q(λ)-learning consistently outperforms Q(0)-learning in both tabular and fuzzy settings, providing faster convergence and greater stability at higher discretization resolutions. Furthermore, fuzzy Q(λ)-learning exhibits superior scalability and generalization capabilities, particularly for complex underactuated systems such as Ball-Beam.
These findings highlight the practical advantages of combining eligibility traces with fuzzy state representation in reinforcement learning. This approach supports the design of more robust controllers for real-world dynamic systems.
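To make the mechanism concrete, the following Python sketch shows a single tabular Q(λ) update step with accumulating eligibility traces over a discretized state space. It is illustrative only, not the authors' implementation: the function name, table sizes, and hyperparameter values are assumptions, and the trace reset used in Watkins's variant for exploratory actions is omitted for brevity.

    import numpy as np

    def q_lambda_step(Q, E, s, a, r, s_next, alpha=0.1, gamma=0.99, lam=0.9):
        """One accumulating-trace Q(lambda) update on tabular Q and trace table E.

        Q : (n_states, n_actions) action-value table
        E : (n_states, n_actions) eligibility-trace table, same shape as Q
        s, a : current discretized state index and action index
        r : reward received after taking action a in state s
        s_next : next discretized state index
        """
        # Temporal-difference error toward the greedy target
        delta = r + gamma * np.max(Q[s_next]) - Q[s, a]
        # Accumulate the trace for the visited state-action pair
        E[s, a] += 1.0
        # Propagate the TD error to all recently visited pairs, then decay traces
        Q += alpha * delta * E
        E *= gamma * lam
        return Q, E

    # Minimal usage with hypothetical sizes (e.g., 20 state bins x 3 actions)
    Q = np.zeros((20, 3))
    E = np.zeros_like(Q)
    Q, E = q_lambda_step(Q, E, s=4, a=1, r=-1.0, s_next=5)

In a fuzzy setting, the single state index s would be replaced by membership degrees over a set of fuzzy regions, with the update distributed across all activated regions in proportion to their membership values.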

Published

2026-02-17

Section

Applied Informatics