A collaborative robot should be able to learn novel task specifications from its users in order to serve as a general-purpose, programmable device. To learn novel tasks from people, we must equip robots with 1) knowledge representations that can be leveraged for efficient planning and skill learning, and 2) mechanisms for natural language communication that enable the robot to understand a human partner's intent. In this work, I address both of these problems. I show how representations for planning and language grounding can be learned jointly, allowing a robot to follow commands in novel environments. This approach provides a framework for teaching robots unstructured tasks via language, enabling the deployment of cooperative robots in homes, offices, and industrial settings.