“Tom Gibbons in his office.”
“Rachael Platt wearing her 3D-printed Iron Man armor.”
CSS community members may be surprised to learn that CSS offers a 3D Printing service. The service, available at services.css.edu, is open to everyone in the CSS community. Projects have included prosthetics and assistive devices, custom brackets for IT systems, and fidget toys. Interested users can create their own 3D models, find models created by others online, or even generate them using AI.
Last fall, Rachael Platt, who manages the 3D Printing service, said, “This 3D modeling based on AI—that’s new. Within the last month is where it’s really started to kick off.” Platt had used an AI prompt to generate an image of a ‘Pumpkin Bulbasaur’ Pokémon, which was then converted into a 3D model suitable for printing.
The AI did not quite capture the concept Platt had in mind, but the image and model it produced were still recognizable. The technology is improving rapidly and can now generate a rough approximation of a concept in only three tries.
Commenting on how even formal education on the subject has advanced, Platt said, “[Tom Gibbons] taught that when I was in undergrad, too. It was called ‘Robotics and Artificial Intelligence,’ I believe… It’s totally different now.” She continued, “We were experimenting with Google Maps, and how to teach the turtlebot how to get somewhere faster. We were taking maps of Tower Hall and teaching our little turtlebot how to roam campus without going down the stairs.” That tie-in with robotics is still used in the classroom, though not quite in the same ways. Tom Gibbons elaborated, “We do a little bit of outreach with what are called Sphero robots. … Those are little programmable balls that you can program to drive around. That’s targeted more at K-12. They’re teaching the beginning of block programming. But we also have some students working on a project: we have a small turtlebot robot that runs an operating system for robotics, more advanced. We’re trying to tie that into an LLM (Large Language Model).”
That sort of integration is a focal point for growth in AI tools. “Now, what we’re doing and seeing, is that you can use an LLM, give it a general direction, and have it translate into instructions that the machine will follow,” Gibbons explained. “It’s basically translating between humans and robots. They’re getting to the point now where robots are being trained with just some videos of how to pick things up.”
That advancement rests on a layering of numerous AI tools: one model must first translate the video into something a computer can interpret, a task that itself involves several layers of machine learning models. The same layering appears in 3D model generation: the tool first translates a prompt into an image, then, once the image is approved, generates the 3D model. Both robotics and 3D printing, then, are becoming a physical manifestation of AI in the world.