Our approach focuses on learning equivariant representations that support continual learning by improving the network's ability to generalize across tasks and resist forgetting.
By adding two optimization objectives to our adaptation of SimCLR, we improved representation quality, yielding better classification performance in continual learning settings.
Comparing equivariant and invariant learning indicates that emphasizing equivariance offers a clear advantage for effective continual learning in dynamic environments.
Our experiments suggest that equivariant representations let the model maintain performance on previous tasks while still learning new ones, which is crucial for class-incremental learning.
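As a rough illustration of how two such objectives can be combined on top of a SimCLR-style setup, the sketch below pairs SimCLR's standard invariance loss (NT-Xent) with a simple equivariance term in which a predictor must recover the transformation parameter relating the two augmented views. The exact objectives used in our method are not spelled out here, so the equivariance formulation (a linear predictor `W` regressing the parameter `t`) is a hypothetical stand-in, not the actual implementation.

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """Invariance objective (SimCLR's NT-Xent): pull the two views of each
    image together, push all other pairs in the batch apart."""
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # Positive for row i is its other view: i+n for the first half, i-n after.
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), targets]))

def equivariance_loss(z1, z2, t, W):
    """Hypothetical equivariance objective: a linear head W must predict the
    transformation parameter t (e.g. rotation angle) that maps view 1 to
    view 2, forcing the representation to encode the transformation."""
    pred = np.concatenate([z1, z2], axis=1) @ W
    return float(np.mean((pred - t) ** 2))

def total_loss(z1, z2, t, W, lam=1.0):
    """Joint objective: invariance term plus weighted equivariance term."""
    return nt_xent(z1, z2) + lam * equivariance_loss(z1, z2, t, W)
```

In this view, the invariance term shapes a discriminative embedding while the equivariance term prevents the encoder from discarding transformation information, which is one plausible reason such representations transfer better across incremental tasks.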
#continual-learning #equivariant-representation #invariant-learning #machine-learning #class-incremental-learning