Tracking where your website ranks for specific keywords is a crucial part of any SEO strategy. Marketing teams often rely on expensive monitoring tools that cost hundreds of dollars per month. For startups, solo entrepreneurs, or small businesses working with tight budgets, these tools can quickly become a financial burden.
The good news? You can build your own keyword rank tracker using Python. In this guide, we'll walk through creating a simple web scraper that monitors your Google rankings for any keyword you choose. No expensive subscriptions required.
Before we dive into the code, make sure Python is already installed on your system. We'll be using two essential libraries for this project:
Requests – handles HTTP requests to fetch web pages
BeautifulSoup – parses HTML and extracts the data we need
Create a new project folder and install these libraries:
```bash
mkdir googlescraper
cd googlescraper
pip install requests beautifulsoup4
```
Now create a Python file called googlescraper.py and import the libraries:
```python
import requests
from bs4 import BeautifulSoup
```
Google uses a straightforward URL pattern for search results: https://www.google.com/search?q={your keyword}. For our example, we'll track the keyword "scrape prices" and find where the domain blog.christian-schou.dk ranks for this search term.
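Keywords containing spaces or special characters need URL encoding before they go into that pattern. Here's a minimal sketch using the standard library (the helper name is my own):

```python
from urllib.parse import urlencode

def build_search_url(keyword: str) -> str:
    # urlencode takes care of spaces and special characters for us
    return "https://www.google.com/search?" + urlencode({"q": keyword})

print(build_search_url("scrape prices"))
# https://www.google.com/search?q=scrape+prices
```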
When you're building scrapers for search engines, you need to handle large-scale data extraction carefully. If you're planning to track multiple keywords or run frequent checks, 👉 consider using a reliable web scraping API that handles proxies and headers automatically to avoid getting blocked.
Let's set up our initial request with proper headers to mimic a real browser:
```python
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
    'referer': 'https://www.google.com'
}

target_url = 'https://www.google.com/search?q=scrape+prices'
resp = requests.get(target_url, headers=headers)
print(resp.status_code)
```
If you see 200 in your terminal, the request was successful.
Google structures its search results in specific HTML classes. The page URLs are nested inside class jGGQ5e, then within yuRUbf. Inside that class, we'll find the anchor tag containing the actual URL in the href attribute.
Here's how to extract these URLs:
```python
soup = BeautifulSoup(resp.text, 'html.parser')
results = soup.find_all("div", {"class": "jGGQ5e"})
```
The results list now contains the HTML for the top 10 search results. Let's loop through them and find our target domain:
```python
from urllib.parse import urlparse

found = False
position = 0

for position, result in enumerate(results, start=1):
    link = result.find("div", {"class": "yuRUbf"}).find("a").get("href")
    domain = urlparse(link).netloc
    if domain == 'blog.christian-schou.dk':
        found = True
        break

if found:
    print(f"Found at position {position}")
else:
    print(f"Not found in top {position} results")
```
The urlparse function (from Python's built-in urllib.parse module) extracts just the domain name from the full URL, making it easy to match against our target domain.
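A quick illustration of what urlparse gives us (the URL here is just an example):

```python
from urllib.parse import urlparse

url = "https://blog.christian-schou.dk/some-article?utm_source=google"
parsed = urlparse(url)

# netloc is just the host, ignoring the path and query string
print(parsed.netloc)  # blog.christian-schou.dk
print(parsed.path)    # /some-article
```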
Running this code might show "not found" if your domain isn't in the top 10 results. To check the top 20 results instead, add &num=20 to your Google URL:
```python
target_url = 'https://www.google.com/search?q=scrape+prices&num=20'
```
When tracking rankings across different result pages or monitoring competitor positions, you'll need to make multiple requests efficiently. 👉 A dedicated scraping solution can help you scale this process without worrying about rate limits.
In my test, the domain appeared at position 18. Keep in mind that rankings vary by country and personalization factors, so results will differ based on your location.
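If you want more control over that variation, Google's search URL also accepts gl (country) and hl (interface language) parameters. Building the URL from a dict keeps this readable; the parameter values below are just an example, and results can still vary:

```python
from urllib.parse import urlencode

# gl pins the country, hl the interface language; num the result count
params = {"q": "scrape prices", "num": 20, "gl": "dk", "hl": "en"}
target_url = "https://www.google.com/search?" + urlencode(params)
print(target_url)
```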
Checking your rankings manually every day gets tedious fast. Let's automate this process using the schedule library:
```bash
pip install schedule
```
Here's a simple test that runs every 5 seconds:
```python
import schedule
import time

def tracker():
    # Your tracking code here
    print("Checking rankings...")

schedule.every(5).seconds.do(tracker)

while True:
    schedule.run_pending()
    time.sleep(1)
```
To run it once per day at a specific time, change the schedule line to:
```python
schedule.every().day.at("12:00").do(tracker)
```
Getting your ranking data automatically is great, but receiving it directly in your inbox every morning is even better. We'll use Python's built-in smtplib library to send email notifications:
```python
import smtplib

def mail(position):
    message_text = f"Your ranking update: {position}"
    server = smtplib.SMTP('smtp.gmail.com', 587)
    server.ehlo()
    server.starttls()
    # Gmail no longer accepts regular account passwords for SMTP;
    # generate an app password in your Google account settings instead.
    server.login("your_email@gmail.com", "your_app_password")
    subject = "Daily Ranking Alert"
    message = f'From: your_email@gmail.com\nSubject: {subject}\n\n{message_text}'
    server.sendmail("your_email@gmail.com", "recipient@gmail.com", message)
    server.quit()
```
Integrate this into your tracker function, and you'll receive automatic updates about your keyword positions.
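Putting the pieces together, the tracker function might look roughly like this. It's a sketch, not a drop-in implementation: the helper name check_position is my own, the CSS classes are the ones from earlier in the guide (they change whenever Google updates its markup), and mail is the email function defined above:

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse

def check_position(keyword, target_domain):
    """Fetch the results page and return the 1-based position, or None."""
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
        'referer': 'https://www.google.com',
    }
    url = 'https://www.google.com/search?q=' + keyword.replace(' ', '+') + '&num=20'
    resp = requests.get(url, headers=headers)
    soup = BeautifulSoup(resp.text, 'html.parser')
    for position, result in enumerate(soup.find_all("div", {"class": "jGGQ5e"}), start=1):
        link = result.find("div", {"class": "yuRUbf"})
        if link and urlparse(link.find("a").get("href")).netloc == target_domain:
            return position
    return None

def tracker():
    position = check_position("scrape prices", "blog.christian-schou.dk")
    # mail() is the email helper defined earlier in this guide
    mail(position if position is not None else "not in top 20")
```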
One challenge: if you close your terminal, the scheduler stops working. To keep it running in the background, use nohup on Linux/Mac:
```bash
nohup python googlescraper.py &
```
This command tells the system to ignore hangup signals and keep your script running even after you log out.
This basic rank tracker is just the starting point. You could expand it to:
Monitor multiple keywords simultaneously
Track competitor rankings alongside your own
Store historical data in a database to identify trends
Create visual charts showing ranking changes over time
Send alerts only when positions change significantly
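For example, the historical-data idea can start as simply as appending each day's result to a CSV file before graduating to a real database. The filename and columns below are my own choices:

```python
import csv
from datetime import date
from pathlib import Path

def log_position(keyword, position, path="rank_history.csv"):
    # Append one row per check; write the header only on first run
    new_file = not Path(path).exists()
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "keyword", "position"])
        writer.writerow([date.today().isoformat(), keyword, position])

log_position("scrape prices", 18)
```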
The same scheduling approach works for other automation projects too, like monitoring stock prices, tracking news about specific topics, or checking website availability.
You've now built a functional keyword rank tracker using Python, completely free of monthly subscription fees. With just four libraries (requests, BeautifulSoup, schedule, and smtplib), you can monitor your SEO progress daily without breaking the bank.
The beauty of building your own tools is the flexibility to customize them exactly how you need. Whether you're tracking one keyword or a hundred, this foundation gives you complete control over your SEO monitoring.