Retrieving and Visualizing Data

Many Data Mining Technologies

• https://hadoop.apache.org/

• http://spark.apache.org/

• https://aws.amazon.com/redshift/

• http://community.pentaho.com/

"Personal Data Mining"

• Our goal is to make you better programmers – not to make you data mining experts

GeoData

• Makes a Google Map from user-entered data

• Uses the Google Geodata API

• Caches data in a database to avoid rate limiting and allow restarting (the caching pattern is sketched after this list)

• Visualized in a browser using the Google Maps API
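Below is a minimal sketch of the caching pattern, assuming a SQLite database named geodata.sqlite and an illustrative geocoding endpoint; the exact names the application uses may differ.

import json
import sqlite3
import urllib.parse
import urllib.request

# Minimal sketch of the cache-before-fetch pattern described above.
# The database name and the geocoding endpoint are illustrative
# assumptions, not the exact names used by the application.
conn = sqlite3.connect('geodata.sqlite')
cur = conn.cursor()
cur.execute('CREATE TABLE IF NOT EXISTS Locations (address TEXT UNIQUE, geodata TEXT)')

def lookup(address):
    # Serve from the cache first to avoid rate limiting
    cur.execute('SELECT geodata FROM Locations WHERE address = ?', (address,))
    row = cur.fetchone()
    if row is not None:
        return json.loads(row[0])
    # Cache miss: call the (hypothetical) geocoding endpoint
    url = 'http://example.com/geojson?' + urllib.parse.urlencode({'address': address})
    data = urllib.request.urlopen(url).read().decode()
    # Store the raw JSON so a restart picks up where it left off
    cur.execute('INSERT INTO Locations (address, geodata) VALUES (?, ?)', (address, data))
    conn.commit()
    return json.loads(data)

Because each result is committed as it arrives, the program can be stopped and restarted without re-fetching addresses it has already seen.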

Page Rank

• Write a simple web page crawler

• Compute a simple version of Google's Page Rank algorithm (a minimal version is sketched after this list)

• Visualize the resulting network
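Here is a minimal sketch of the simplified Page Rank iteration over a small in-memory link graph; the sample graph, the 0.85 damping factor, and the fixed iteration count are illustrative assumptions.

# Minimal sketch of a simplified Page Rank computation.
# The sample graph, the 0.85 damping factor, and the fixed
# iteration count are illustrative assumptions.
links = {
    'a': ['b', 'c'],
    'b': ['c'],
    'c': ['a'],
}

ranks = {page: 1.0 for page in links}

for _ in range(20):  # repeat until the ranks settle down
    new_ranks = {}
    for page in links:
        # Each page receives a share of the rank of every page linking to it
        incoming = sum(ranks[src] / len(outs)
                       for src, outs in links.items() if page in outs)
        new_ranks[page] = 0.15 + 0.85 * incoming
    ranks = new_ranks

print(ranks)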

Search Engine Architecture

• Web Crawling

• Index Building

• Searching

Web Crawler

A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches.

Web Crawler

• Retrieve a page

• Look through the page for links

• Add the links to a list of “to be retrieved” sites

• Repeat... (a minimal version of this loop is sketched below)
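A minimal sketch of the loop, assuming the BeautifulSoup library is installed and using a placeholder starting URL:

import time
import urllib.request
from urllib.parse import urljoin
from bs4 import BeautifulSoup  # assumes BeautifulSoup is installed

# Minimal sketch of the retrieve / find links / repeat loop.
# The starting URL is a placeholder and error handling is omitted.
to_visit = ['http://www.example.com/']
visited = set()

while to_visit and len(visited) < 10:
    url = to_visit.pop()
    if url in visited:
        continue
    visited.add(url)
    html = urllib.request.urlopen(url).read()
    soup = BeautifulSoup(html, 'html.parser')
    # Look through the page for links and queue the ones not yet seen
    for tag in soup.find_all('a', href=True):
        link = urljoin(url, tag['href'])
        if link.startswith('http') and link not in visited:
            to_visit.append(link)
    time.sleep(1)  # a simple politeness pause between requests

print(len(visited), 'pages retrieved')

The one-second pause is a crude version of the politeness policy described in the next section.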

Web Crawling Policy

• a selection policy that states which pages to download,

• a re-visit policy that states when to check for changes to the pages,

• a politeness policy that states how to avoid overloading Web sites, and

• a parallelization policy that states how to coordinate distributed Web crawlers.

robots.txt

• A way for a web site to communicate with web crawlers

• An informal and voluntary standard

• Sometimes folks make a “Spider Trap” to catch “bad” spiders

User-agent: *
Disallow: /cgi-bin/
Disallow: /images/
Disallow: /tmp/
Disallow: /private/
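Python's standard library can read these rules directly. A minimal sketch using urllib.robotparser, with a placeholder site URL:

from urllib import robotparser

# Check robots.txt before fetching a page; the site URL is a placeholder.
rp = robotparser.RobotFileParser()
rp.set_url('http://www.example.com/robots.txt')
rp.read()

# If the site served the rules shown above, anything under
# /private/ would be disallowed for every user agent
print(rp.can_fetch('*', 'http://www.example.com/private/secret.html'))
print(rp.can_fetch('*', 'http://www.example.com/index.html'))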

Search Indexing

Search engine indexing collects, parses, and stores data to facilitate fast and accurate information retrieval. The purpose of storing an index is to optimize speed and performance in finding relevant documents for a search query. Without an index, the search engine would scan every document in the corpus, which would require considerable time and computing power.
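The core data structure here is an inverted index: a mapping from each word to the documents that contain it, so a query becomes a dictionary lookup rather than a scan of the corpus. A minimal sketch, with made-up documents:

# Minimal sketch of an inverted index; the documents are made up.
docs = {
    1: 'the quick brown fox',
    2: 'the lazy dog',
    3: 'the quick dog barks',
}

index = {}
for doc_id, text in docs.items():
    for word in text.split():
        index.setdefault(word, set()).add(doc_id)

def search(word):
    # A single dictionary lookup instead of scanning every document
    return index.get(word, set())

print(search('quick'))  # {1, 3}
print(search('dog'))    # {2, 3}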

Mailing Lists - Gmane

• Crawl the archive of a mailing list

• Do some analysis / cleanup

• Visualize the data as a word cloud and line graphs

Warning: This Dataset is > 1GB

• Do not just point this application at gmane.org and let it run all night

• There are no rate limits – these are cool folks

• Don't ruin it for the rest of us

• Please use my non-rate-limited copy of this data for your testing

http://mbox.dr-chuck.net/sakai.devel/4/5
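A minimal sketch of pulling a few messages from that copy; the start/end URL pattern is inferred from the example above and is an assumption about how the archive is addressed:

import urllib.request

# Fetch a handful of messages from the non-rate-limited copy.
# The start/end URL pattern is inferred from the example URL above.
base = 'http://mbox.dr-chuck.net/sakai.devel/'

for msg_id in range(4, 9):
    url = base + str(msg_id) + '/' + str(msg_id + 1)
    text = urllib.request.urlopen(url).read().decode()
    print(url, len(text), 'characters')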