Learn how to extract cryptocurrency data from live websites using C# and ScraperAPI — no complex proxy management, no endless retry logic, just straightforward data extraction that actually works.
So you want to scrape web data in C#, but you're tired of dealing with IP bans, captchas, and websites that seem to know you're a bot before you even send your first request. I get it. I've been there, staring at "403 Forbidden" errors at 2 AM, wondering if there's a better way.
There is.
Here's the thing about web scraping: the actual data extraction part? That's usually the easy part. The hard part is dealing with all the anti-scraping measures websites throw at you. ScraperAPI handles that messy stuff so you can focus on what you actually care about — getting the data.
It rotates IP addresses automatically, retries failed requests without you lifting a finger, and handles JavaScript rendering when needed. You send it a URL and your API key, and it sends back clean HTML. Simple as that.
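At its core it's a plain HTTP endpoint: you append your key and the target URL as query parameters and send a GET. A minimal sketch of building that request URL (the `api.scraperapi.com` endpoint and the `api_key`/`url` parameter names come from ScraperAPI's docs; the key is a placeholder):

```csharp
using System;

class ScraperApiUrlDemo {
    // Builds a ScraperAPI request URL: your API key and the (escaped)
    // target URL travel as query parameters; the response body is the page's HTML.
    static string BuildRequestUrl(string apiKey, string targetUrl) {
        return "https://api.scraperapi.com/?api_key=" + apiKey
             + "&url=" + Uri.EscapeDataString(targetUrl);
    }

    static void Main() {
        Console.WriteLine(BuildRequestUrl("YOUR_API_KEY", "https://coinmarketcap.com/"));
    }
}
```

Hand the resulting URL to any HTTP client — `HttpClient.GetStringAsync` works fine — and the HTML comes back as the response body.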
The key features that make this work:
Automatic IP rotation with every request
Built-in retry logic for failed attempts
Full request customization (headers, geolocation, request types)
Session support for multi-page workflows
Unlimited bandwidth (yes, really)
Speed that doesn't make you want to take a coffee break between requests
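Most of that customization is driven by extra query parameters on the same request. A hedged sketch — the parameter names `render`, `country_code`, and `session_number` are taken from ScraperAPI's documentation; verify them against the current API reference before relying on them:

```csharp
using System;
using System.Text;

class ScraperApiOptionsDemo {
    // Appends optional ScraperAPI query parameters to a base request URL.
    // Parameter names here are assumptions from ScraperAPI's docs.
    static string WithOptions(string baseUrl, params (string Key, string Value)[] options) {
        var sb = new StringBuilder(baseUrl);
        foreach (var (key, value) in options) {
            sb.Append('&').Append(key).Append('=').Append(Uri.EscapeDataString(value));
        }
        return sb.ToString();
    }

    static void Main() {
        string url = "https://api.scraperapi.com/?api_key=KEY&url=https%3A%2F%2Fexample.com";
        Console.WriteLine(WithOptions(url,
            ("render", "true"),        // execute JavaScript before returning HTML
            ("country_code", "us"),    // route the request through US-based proxies
            ("session_number", "7"))); // reuse the same proxy across requests
    }
}
```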
If you're building anything that needs to extract data at scale — whether it's monitoring competitor prices, tracking market trends, or aggregating information from multiple sources — you need something that handles the infrastructure headaches for you. 👉 Stop fighting anti-bot systems and start collecting data reliably
First things first, you need an API key. Head over to the ScraperAPI signup page. They give you 1,000 free requests per month to test things out, which is plenty for getting started or running small projects.
Once you're in the dashboard, you'll see your API key front and center, along with some sample code. Keep that key handy — you'll need it in a minute.
Let's build something real. We're going to scrape cryptocurrency data from CoinMarketCap — names, prices, market changes, all that good stuff — and dump it into a CSV file you can actually use.
Start with a Console App in .NET Core. Nothing fancy, just a simple console application. You could use any project type really, but console apps are perfect for scripts like this.
You need two NuGet packages:
ScraperAPI — the official C# SDK that makes the API calls dead simple
HtmlAgilityPack — your HTML parsing workhorse that handles messy, real-world HTML without complaining
Open your Package Manager Console and run these:
Install-Package ScraperApi
Install-Package HtmlAgilityPack
Here's where it gets interesting. The GetDataFromWebPage() method does the heavy lifting:
```csharp
static async Task GetDataFromWebPage() {
    try {
        Console.WriteLine("### Started Getting Data.");
        string apiKey = "**Add Your API Key Here**";

        // Every request made through this client is routed via ScraperAPI's proxy pool
        HttpClient scraperApiHttpClient = ScraperApiClient.GetProxyHttpClient(apiKey);
        scraperApiHttpClient.BaseAddress = new Uri("https://coinmarketcap.com");

        var response = await scraperApiHttpClient.GetAsync("/");
        if (response.StatusCode == HttpStatusCode.OK) {
            var htmlData = await response.Content.ReadAsStringAsync();
            ParseHtml(htmlData);
        }
    } catch (Exception ex) {
        Console.WriteLine("GetDataFromWebPage Failed: {0}", ex.Message);
    }
}
```
Replace that placeholder with your actual API key. The GetProxyHttpClient() method creates an HTTP client that routes everything through ScraperAPI's infrastructure. You make a normal HTTP request, but behind the scenes, ScraperAPI is rotating IPs, handling retries, and bypassing anti-scraping measures.
The GetAsync() call fetches the page content. If the response is good (status code 200), we grab the HTML and pass it along to the parser.
Raw HTML is useless. You need to extract the actual data you care about. That's where HtmlAgilityPack comes in:
```csharp
static void ParseHtml(string htmlData) {
    try {
        Console.WriteLine("### Started Parsing HTML.");
        var coinData = new Dictionary<string, string>();

        HtmlDocument htmlDoc = new HtmlDocument();
        htmlDoc.LoadHtml(htmlData);

        // Locate the main table body, then walk its rows
        var theHTML = htmlDoc.DocumentNode.SelectSingleNode("html//body");
        var cmcTableBody = theHTML.SelectSingleNode("//tbody");
        var cmcTableRows = cmcTableBody.SelectNodes("tr");
        if (cmcTableRows != null) {
            foreach (HtmlNode row in cmcTableRows) {
                // Columns 2 and 3 hold the name and price at the time of writing;
                // adjust these indexes if CoinMarketCap changes its table layout
                var cmcTableColumns = row.SelectNodes("td");
                string name = cmcTableColumns[2].InnerText;
                string price = cmcTableColumns[3].InnerText;
                coinData.Add(name, price);
            }
        }
        WriteDataToCSV(coinData);
    } catch (Exception ex) {
        Console.WriteLine("ParseHtml Failed: {0}", ex.Message);
    }
}
```
Load the HTML, find the table body, iterate through the rows, extract the coin names and prices. The SelectSingleNode() method grabs the first element matching your XPath query, while SelectNodes() gives you a collection of matches.
We're using a Dictionary here to store the name-price pairs, but you could use whatever data structure makes sense for your project.
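One caveat with a `Dictionary`: `Add()` throws if two rows share the same name. When duplicates or row order matter, a list of name/price tuples is a safer fit — a quick sketch with placeholder values:

```csharp
using System;
using System.Collections.Generic;

class CoinListDemo {
    static void Main() {
        // A list of tuples keeps rows in scrape order and tolerates
        // repeated names, unlike Dictionary.Add, which throws on a duplicate key
        var coinData = new List<(string Name, string Price)>();
        coinData.Add(("Bitcoin", "$100"));
        coinData.Add(("Ethereum", "$200"));
        coinData.Add(("Bitcoin", "$101")); // a repeated name is fine here

        foreach (var (name, price) in coinData) {
            Console.WriteLine(name + " = " + price);
        }
    }
}
```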
Finally, take that data and write it somewhere useful:
```csharp
static void WriteDataToCSV(Dictionary<string, string> cryptoCurrencyData) {
    try {
        var csvBuilder = new StringBuilder();
        csvBuilder.AppendLine("Name,Price");
        foreach (var item in cryptoCurrencyData) {
            // Quote both fields so embedded commas don't break the CSV
            csvBuilder.AppendLine(string.Format("\"{0}\",\"{1}\"", item.Key, item.Value));
        }
        File.WriteAllText("C:\\Cryptocurrency-Prices.csv", csvBuilder.ToString());
        Console.WriteLine("### Completed Writing Data To CSV File.");
    } catch (Exception ex) {
        Console.WriteLine("WriteDataToCSV Failed: {0}", ex.Message);
    }
}
```
Build the CSV content line by line, then write it all at once. Quick, simple, effective.
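One thing to watch with hand-built CSV: fields containing commas, quotes, or newlines need escaping, and names scraped from a live page can contain any of them. A small helper following the usual convention (RFC 4180: wrap the field in quotes and double any embedded quotes):

```csharp
using System;
using System.Text;

class CsvEscapeDemo {
    // Quotes a field only when it contains a comma, quote, or newline,
    // doubling embedded quotes per the common CSV convention (RFC 4180)
    static string EscapeCsvField(string field) {
        if (field.Contains(",") || field.Contains("\"") || field.Contains("\n")) {
            return "\"" + field.Replace("\"", "\"\"") + "\"";
        }
        return field;
    }

    static void Main() {
        var csvBuilder = new StringBuilder();
        csvBuilder.AppendLine("Name,Price");
        csvBuilder.AppendLine(EscapeCsvField("Plain Coin") + "," + EscapeCsvField("$1.00"));
        csvBuilder.AppendLine(EscapeCsvField("Comma, Coin") + "," + EscapeCsvField("$2.00"));
        Console.Write(csvBuilder.ToString());
    }
}
```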
Your Main method just needs to kick everything off (an async Main requires C# 7.1 or later):
```csharp
static async Task Main(string[] args) {
    await GetDataFromWebPage();
}
```
Run it, and you'll get a CSV file with two columns: currency name and current price. From there, you can import it into Excel, feed it into a database, or process it however you need.
When you open that CSV file, you'll see real, usable data — cryptocurrency names and their current prices, cleanly formatted and ready to use. No manual copying, no browser automation, no fighting with rate limits.
The beauty of this approach is that once you have the basic structure down, you can adapt it to scrape almost anything. Different website? Just adjust the parsing logic. Need more data points? Add more columns. Want to run it on a schedule? Wrap it in a scheduled task.
Web scraping in C# doesn't have to be complicated. With ScraperAPI handling the infrastructure and HtmlAgilityPack parsing the content, you can focus on extracting the data you need instead of wrestling with proxies and anti-bot systems. Whether you're tracking prices, monitoring content changes, or aggregating data from multiple sources, this approach gives you a solid foundation to build on. For developers who need reliable, scalable data extraction without the usual headaches, 👉 ScraperAPI makes web scraping straightforward and dependable.