Enter DeepSeek-R1
I wanted to put DeepSeek-R1 through its paces as a more efficient local AI model. Using DeepSeek-R1 with Search and R1-Reasoning enabled, I asked it to walk through the process of researching and developing a simple, embeddable app that students in a school can use to estimate the energy use, carbon footprint, water use and illustrative equivalencies of using various types of AI. Through an iterative process of conversation, testing and refining, the first version of the app was created and deployed on our AI guides. This took about one hour.
But there was a problem...
Although the app worked and some sources were cited, there were many hallucinated links and some of the numbers seemed off. So during the break, I set a 4-hour limit to track down more reliable figures, re-create the app and ensure the sources all pointed to reliable information. This required further chats with DeepSeek-R1 and an extensive thread with Perplexity Pro as a research assistant, as well as a lot of reading and manual calculations.
Eventually we got to a workable solution. Overall I think the results skew high, but the huge variability in grid carbon intensities and in the energy needs of different models made it difficult to produce simple parameters that would still be useful in a school context. As a result, the app's estimates err on the high side (I think), but it serves as an illustration of the different types of impacts of AI usage.
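To give a sense of that variability: even an identical query can have a carbon cost that shifts by more than an order of magnitude depending on the grid behind the data centre. The per-query energy and grid intensities below are assumed, round numbers for illustration only, not the app's actual parameters.

```typescript
// Illustration only: the same query has a very different carbon cost on
// different grids. Per-query energy and intensities are assumed values.
const QUERY_KWH = 0.005; // assumed energy per query (5 Wh)

const gridIntensities: Record<string, number> = {
  "hydro-heavy grid": 0.03, // kg CO2 per kWh (assumed)
  "world-average grid": 0.45,
  "coal-heavy grid": 0.9,
};

for (const [grid, kgPerKwh] of Object.entries(gridIntensities)) {
  const grams = QUERY_KWH * kgPerKwh * 1000;
  console.log(`${grid}: ~${grams.toFixed(1)} g CO2 per query`);
}
```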
Using DeepSeek for this task was really interesting. Watching it reason through the problem and second-guess itself felt like how I might work through a problem myself, and I learned a lot about how to approach the problem by scrolling through its "thought" patterns. Here's a cute example as it was trying to work out the energy intensity of video generation:
"If each frame is an image, then 1800 *4.51 Wh = 8,118 Wh or 8.118 kWh per minute. That's 8,118 Wh, which is 8.118 kWh. But this seems too high. Alternatively, maybe video generation is more optimized. Alternatively, the initial estimate of 3.5–5.0 kWh per minute could be used, but without a citation. Since the user provided grid intensities, perhaps calculate CO2 based on that energy, but without the energy data, it's impossible. Therefore, I'll have to exclude video generators from the table due to lack of data in the provided search results. Alternatively, include them with a note that the data is estimated from related tasks. But the user might not like that. Hmm."
In the end, I went down a research rabbit-hole and still couldn't get to a good number, so I went with the high estimate: treating every frame of a minute of video as an image generation on a reasonably efficient model (a quick sketch of that arithmetic is below).
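For the curious, here is the back-of-envelope version of that fallback. The 4.51 Wh-per-image figure is the one from the reasoning trace above; the 30 fps frame rate is an assumption.

```typescript
// Fallback estimate: treat each frame of generated video as one image generation.
const FPS = 30;            // assumed frame rate
const WH_PER_IMAGE = 4.51; // Wh per generated image (figure from the trace above)

const framesPerMinute = FPS * 60;                                    // 1800 frames
const kWhPerMinuteOfVideo = (framesPerMinute * WH_PER_IMAGE) / 1000; // ≈ 8.1 kWh

console.log(`~${kWhPerMinuteOfVideo.toFixed(1)} kWh per minute of generated video`);
```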
Getting Backup From Claude & Other Models
Once DeepSeek went super viral, it would hit "high traffic" limitations more often. To double-check the coding and make the app responsive to screen-size adjustments, I copied the project over to Claude 3.5 Sonnet on Poe.com. I then asked it to create an interactive infographic to help students visualise their AI footprint, using the inputs we had previously determined. It looks OK.
Edit: Very quickly, Perplexity Pro added DeepSeek-R1 as an option, so I gave it a single big prompt to see if it could build the app in one go. The result is at the bottom of the methods & about page. Not bad. Soon after, Alibaba's Qwen team released its most powerful model (so far) for free. There are alternative versions made with these models on the Alternative Apps page, with the prompt shared below them.
Organising The Research
As this is a site intended for students, I am aware that it can get overwhelming pretty quickly. The references on the main page have been arranged by topic and linked to sources that students can follow up if they need to.
How about my own footprint in this task?
Across different models, this task took 27 text queries and 17 coding tasks (in DeepSeek) and 22 AI searches (in Perplexity Pro), as well as about 4 hours of personal research and work. Using the app's own logic (sketched after the list below), that comes to about:
0.4 kWh of electricity
0.145 kg CO2 emissions
0.7 L of water
Or the equivalent of:
🔋 33.0 smartphone charges
💻 5.7 laptop hours
🚗 1.2 km car equivalent
🌳 0.01 trees needed for offset
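If you're wondering how usage counts turn into figures like these, here is a minimal sketch of that kind of arithmetic. The constants are not the app's actual parameters; they are assumed, back-of-envelope values chosen only to show the shape of the calculation.

```typescript
// Illustrative sketch of how usage counts become footprint estimates.
// None of these constants are the app's real parameters; all are assumed values.
const WH_PER_TEXT_QUERY = 5;        // assumed Wh per chat query
const WH_PER_CODING_TASK = 8;       // assumed Wh per coding task
const WH_PER_AI_SEARCH = 6;         // assumed Wh per AI-assisted search
const KG_CO2_PER_KWH = 0.36;        // assumed grid carbon intensity
const LITRES_WATER_PER_KWH = 1.75;  // assumed data-centre water use
const KWH_PER_PHONE_CHARGE = 0.012; // assumed smartphone charge
const KWH_PER_LAPTOP_HOUR = 0.07;   // assumed laptop power draw
const KG_CO2_PER_CAR_KM = 0.12;     // assumed petrol-car emissions
const KG_CO2_PER_TREE_YEAR = 21;    // assumed annual sequestration per tree

function footprint(textQueries: number, codingTasks: number, searches: number) {
  const kWh =
    (textQueries * WH_PER_TEXT_QUERY +
      codingTasks * WH_PER_CODING_TASK +
      searches * WH_PER_AI_SEARCH) / 1000;
  const kgCO2 = kWh * KG_CO2_PER_KWH;
  return {
    kWh,
    kgCO2,
    litresWater: kWh * LITRES_WATER_PER_KWH,
    phoneCharges: kWh / KWH_PER_PHONE_CHARGE,
    laptopHours: kWh / KWH_PER_LAPTOP_HOUR,
    carKm: kgCO2 / KG_CO2_PER_CAR_KM,
    treesForOffset: kgCO2 / KG_CO2_PER_TREE_YEAR,
  };
}

console.log(footprint(27, 17, 22)); // the usage counts from this write-up
```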
In this case, the footprint is somewhat justifiable, as the task would have been impossible (for me) without AI assistance. The research, validation, coding, testing and everything else that went into this would have taken me at least 20-30 hours of intense laptop use.
How could you use this app?
This is only for estimation and illustration in a school context, and not an industry tool. It is designed to raise awareness and questions about the environmental impacts of AI, and could form the basis for lines of inquiry and discussion. If you try it out with students, please let me know what discussions they bring up.
What do they notice? What do they wonder?
What types of AI have bigger impacts?
What do they want to know more about?
What do they learn from the sources?
How might they be more mindful in their AI use?
How might they mitigate or offset their AI footprints?
About Stephen
Stephen Taylor is the Director of Innovation in Learning & Teaching at the Western Academy of Beijing. With an MA in International Education and a previous life in Marine Biology, he has a strong sense of mission in science education, innovation and research. A current EdD student at the University of Bath, Stephen's early work has focused on ethical frameworks for AIEd, in particular the work of UNESCO. You can find him on LinkedIn, and some more of WAB's work on AI here.