Enter DeepSeek-R1 (Jan 2025)
I wanted to put DeepSeek-R1 through its paces as a more efficient local AI model. Using DeepSeek-R1 with Search and R1-Reasoning enabled, I asked it to walk through researching and developing a simple, embeddable app that students in a school could use to estimate the energy use, carbon footprint, water use and illustrative equivalencies of using various types of AI. Through an iterative process of conversation, testing and refining, the first version of the app was created and deployed on our AI guides. This took about one hour.
But there was a problem...
Although the app worked and some sources were cited, there were many hallucinated links and some of the numbers seemed off. So during the break, I set myself a 4-hour limit to track down more reliable figures, re-create the app and ensure every source pointed to trustworthy information. This required further chats with DeepSeek-R1 and an extensive thread with Perplexity Pro as a research assistant, as well as a lot of reading and manual calculations.
Eventually we got to a workable solution. Overall I think the results skew high, but the huge variability in energy grid efficiencies and in the energy needs of different models made it difficult to produce simple parameters that would still be useful in a school context. As a result, the app's estimates err (I think) on the conservative, high side, but it serves as an illustration of the different types of impacts of AI usage.
Using DeepSeek for this task was really interesting. Watching it reason through the problem and second-guess itself felt a lot like how I might work through a problem myself, and I learned a lot about how to approach the problem by scrolling through its "thought" patterns. Here's a cute example as it was trying to work out the energy intensity of video generation:
"If each frame is an image, then 1800 *4.51 Wh = 8,118 Wh or 8.118 kWh per minute. That's 8,118 Wh, which is 8.118 kWh. But this seems too high. Alternatively, maybe video generation is more optimized. Alternatively, the initial estimate of 3.5–5.0 kWh per minute could be used, but without a citation. Since the user provided grid intensities, perhaps calculate CO2 based on that energy, but without the energy data, it's impossible. Therefore, I'll have to exclude video generators from the table due to lack of data in the provided search results. Alternatively, include them with a note that the data is estimated from related tasks. But the user might not like that. Hmm."
In the end, I went down a research rabbit-hole and still couldn't get to a good number, so I went with the high estimate of frames per minute, as if each frame were an image generation on a reasonably efficient model.
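To make that assumption concrete, here is a minimal sketch of the estimate, assuming 30 fps and re-using the 4.51 Wh per-image figure from the trace above as a placeholder; the per-image figure in the final app may well differ.

```python
# High-end estimate for video generation energy, treating every frame
# as a standalone image generation (deliberately an upper bound).
# Assumed values, not measurements: 30 fps and 4.51 Wh per image,
# taken from the reasoning trace quoted above.

FRAMES_PER_SECOND = 30
WH_PER_IMAGE = 4.51  # Wh per generated image (assumed placeholder)

def video_energy_kwh(duration_seconds: float) -> float:
    """Upper-bound energy (kWh) to generate a video clip."""
    frames = duration_seconds * FRAMES_PER_SECOND
    return frames * WH_PER_IMAGE / 1000  # Wh -> kWh

# One minute of video: 1800 frames x 4.51 Wh ≈ 8.1 kWh
print(f"{video_energy_kwh(60):.2f} kWh per minute of video")
```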
Getting Backup From Claude & Other Models
Once DeepSeek went super viral, it would hit "high traffic" limitations more often. To double-check the code and make the app responsive to screen-size changes, I copied the project over to Claude-3.5 Sonnet on Poe.com. I then asked it to create an interactive infographic to help students visualise their AI footprint, using the inputs we had previously determined. It looks OK.
Edit: Very soon afterwards, Perplexity Pro added DeepSeek-R1 as an option, so I gave it a single big prompt to see if it could do it in one go. The result is at the bottom of the methods & about page. Not bad. Soon after, Alibaba's Qwen team released its most powerful model (so far) for free. There are alternative versions built with these models on the Alternative Apps page, with the prompt shared below them.
January 2026: Updates with Z.AI
With more models coming online, and more data being released and processed about AI usage, it was time for an update. Testing Z.AI's Full Stack mode along with Deep Research, and adding in references from my EdD research, I had another go, starting from scratch. I wanted to make an app that could teach as you go, and include considerations such as the relative impacts of different tools, suggestions for reducing footprints, and adjustments for different types of grid and efficiency. This is now the app on the front page. It was too complex to build in HTML alone, so we used a mixture of JS and other elements. After multiple iterations, I was able to download the repository, host it on GitHub and deploy it with Vercel. I hope you like it. I have made the repository public here.
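The grid adjustment is the simplest of those considerations to show. The app's real code is in the public repository; the sketch below just illustrates the idea, with ballpark grid intensities I'm assuming for the example rather than the app's actual factors.

```python
# Illustration only: emissions = energy (kWh) x grid carbon intensity (kg CO2/kWh).
# The intensities below are ballpark figures assumed for the example,
# not the factors the app actually uses.

GRID_INTENSITY_KG_PER_KWH = {
    "low-carbon grid (hydro/nuclear-heavy)": 0.05,
    "mixed grid": 0.40,
    "coal-heavy grid": 0.80,
}

def co2_kg(energy_kwh: float, grid: str) -> float:
    """Convert an energy estimate into CO2 emissions for a chosen grid."""
    return energy_kwh * GRID_INTENSITY_KG_PER_KWH[grid]

task_kwh = 0.3  # hypothetical energy for a batch of AI tasks
for grid in GRID_INTENSITY_KG_PER_KWH:
    print(f"{grid}: {co2_kg(task_kwh, grid):.2f} kg CO2")
```

The same batch of tasks can produce more than ten times the emissions depending on the grid, which is why the app asks students to pick one rather than assuming a global average.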
Organising The Research
Since the site is intended for students, I am aware that this can get overwhelming pretty quickly. The references on the main page have been arranged by topic and linked to sources that they can follow up if they need to.
How about my own footprint in this task?
The original task took 30 text queries, 2 image generations (for the icon), 17 coding tasks (in DeepSeek) and 22 AI searches (in Perplexity Pro), as well as about 4 hours of personal research and work. Updating with Z.AI's "Full Stack" mode took an additional 18 coding runs, 3 deep research runs and 3 data analysis tasks, plus another 7-8 hours. Using the app's own logic (a rough sketch of the conversion follows the lists below), that is about:
1.52 kWh of electricity
0.68 kg of CO2 emissions
2.88 L of water
Or the equivalent of:
🔋 101.5 smartphone charges
💡 25.4 hours of LED light
🚗 2.7 km car equivalent
🌳 0.4 months of tree growth
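For anyone curious how equivalencies like these fall out of the totals, here is a minimal sketch of the conversion step. The per-unit factors are commonly quoted ballpark values that I'm assuming for illustration, so the printout will not exactly match the app's figures.

```python
# Illustrative equivalency conversions. The factors are commonly quoted
# ballpark values assumed here, not necessarily the app's exact ones.

KWH_PER_PHONE_CHARGE = 0.015      # ~15 Wh for a full smartphone charge
KG_CO2_PER_CAR_KM = 0.25          # average petrol car
KG_CO2_PER_TREE_MONTH = 21 / 12   # a tree absorbs roughly 21 kg CO2 a year

total_kwh = 1.52     # totals from the list above
total_co2_kg = 0.68

print(f"🔋 {total_kwh / KWH_PER_PHONE_CHARGE:.1f} smartphone charges")
print(f"🚗 {total_co2_kg / KG_CO2_PER_CAR_KM:.1f} km car equivalent")
print(f"🌳 {total_co2_kg / KG_CO2_PER_TREE_MONTH:.1f} months of tree growth")
```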
In this case, the footprint is somewhat justifiable, as the task would have been impossible (for me) without AI assistance. The research, validation, coding, testing and everything else that went into this would have taken me at least 60-80 hours of intense laptop use, with its own energy use and personal impacts.