The article discusses the development of EXPLORER, a neurosymbolic agent designed for text-based games (TBGs), which require both natural language understanding and reasoning. Traditional deep reinforcement learning (RL) struggles to generalize to unseen objects, whereas EXPLORER pairs a neural exploration module with a symbolic module for policy exploitation. This combination lets EXPLORER generalize across games and achieve superior results in experiments on cooking and commonsense games, addressing key limitations of current RL methods.
Text-based games (TBGs) are a challenging testbed for NLP: agents must combine natural language understanding with reasoning, linking linguistic elements to decision-making.
EXPLORER integrates a neural exploration module with a symbolic exploitation module, allowing it to learn generalized symbolic policies that carry over to both seen and unseen contexts.
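The exploration/exploitation split described above can be sketched as a simple action-selection loop: exploit a learned symbolic rule when one covers the current state, and fall back to neural exploration otherwise. This is an illustrative sketch only; the function names, the set-of-facts state representation, and the rule format are assumptions, not the paper's actual implementation (the neural module is stood in for by random sampling here).

```python
import random

def symbolic_action(state_facts, rules):
    """Return the action of the first rule whose preconditions hold, else None.

    Each rule is a (preconditions, action) pair, where preconditions is a
    set of symbolic facts (hypothetical format, for illustration).
    """
    for preconditions, action in rules:
        if preconditions <= state_facts:  # all preconditions satisfied
            return action
    return None

def neural_explore(candidate_actions, rng):
    """Stand-in for a neural exploration module: sample an admissible action."""
    return rng.choice(candidate_actions)

def select_action(state_facts, candidate_actions, rules, rng):
    # Exploit the symbolic policy when it covers the state; explore otherwise.
    action = symbolic_action(state_facts, rules)
    return action if action is not None else neural_explore(candidate_actions, rng)

rng = random.Random(0)
rules = [({"carrying(apple)", "at(kitchen)"}, "eat apple")]

# Rule fires: the symbolic policy is exploited.
print(select_action({"carrying(apple)", "at(kitchen)"},
                    ["go north", "eat apple"], rules, rng))  # eat apple

# No rule covers the state: the (stand-in) neural module explores.
print(select_action(set(), ["go north", "look"], rules, rng))
```

In the real agent the exploration module would be a trained neural policy and the symbolic rules would be learned and generalized from experience; the point of the sketch is only the control flow between the two modules.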
#reinforcement-learning #neurosymbolic-ai #text-based-games #natural-language-processing #ai-research