In this project, I used data provided by Motivate (https://www.motivateco.com/), a bike share system provider for many major cities in the United States, to uncover bike share usage patterns. I compared system usage between three large cities: Chicago, New York City, and Washington, DC. I wrote code to provide the following information:
most common month
most common day of week
most common hour of day
most common start station
most common end station
most common trip from start to end (i.e., most frequent combination of start station and end station)
total travel time
average travel time
counts of each user type
counts of each gender (only available for NYC and Chicago)
earliest, most recent, most common year of birth (only available for NYC and Chicago)
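The descriptive statistics above can be sketched with pandas. This is a minimal example on a few toy rows; the column names (`Start Time`, `Trip Duration`, `User Type`, etc.) are assumptions about the shape of the Motivate CSVs, not the exact files:

```python
import pandas as pd

# Toy rows standing in for one city's bike share CSV (column names assumed)
df = pd.DataFrame({
    "Start Time": ["2017-06-01 08:00:00", "2017-06-02 08:30:00", "2017-07-03 17:15:00"],
    "Start Station": ["A", "A", "B"],
    "End Station": ["B", "B", "C"],
    "Trip Duration": [600, 720, 300],   # seconds
    "User Type": ["Subscriber", "Subscriber", "Customer"],
})
df["Start Time"] = pd.to_datetime(df["Start Time"])

common_month = df["Start Time"].dt.month_name().mode()[0]   # most common month
common_day = df["Start Time"].dt.day_name().mode()[0]       # most common day of week
common_hour = df["Start Time"].dt.hour.mode()[0]            # most common hour of day
common_start = df["Start Station"].mode()[0]                # most common start station
common_end = df["End Station"].mode()[0]                    # most common end station
common_trip = (df["Start Station"] + " -> " + df["End Station"]).mode()[0]
total_travel = df["Trip Duration"].sum()                    # total travel time
avg_travel = df["Trip Duration"].mean()                     # average travel time
user_counts = df["User Type"].value_counts()                # counts of each user type
```

The gender and birth-year statistics follow the same `mode()`/`min()`/`max()` pattern on the extra columns present in the NYC and Chicago files.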
The analysis and results of this project can be seen in the .ipynb file linked below.
My goal in this project was to wrangle WeRateDogs Twitter data to create interesting and trustworthy analysis and visualizations using Python. The following was done during this project:
Gathering data
Assessing and cleaning data
Storing, analyzing, and visualizing my wrangled data
Reporting on my wrangling efforts and my data analyses and visualizations
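The assessing, cleaning, and storing steps above can be sketched with pandas. The rows and column names here are hypothetical stand-ins for the WeRateDogs archive, and the quality issues shown (duplicate tweets, `"None"` strings for missing names, implausible ratings) are illustrative assumptions:

```python
import numpy as np
import pandas as pd

# Hypothetical raw rows in the rough shape of the WeRateDogs archive
raw = pd.DataFrame({
    "tweet_id": [1, 1, 2, 3],
    "timestamp": ["2017-01-01", "2017-01-01", "2017-01-02", "2017-01-03"],
    "rating_numerator": [13, 13, 12, 1776],
    "rating_denominator": [10, 10, 10, 10],
    "name": ["Charlie", "Charlie", "None", "Lucy"],
})

clean = raw.drop_duplicates(subset="tweet_id").copy()    # remove duplicate tweets
clean["timestamp"] = pd.to_datetime(clean["timestamp"])  # fix the timestamp dtype
clean["name"] = clean["name"].replace("None", np.nan)    # "None" strings -> missing
clean = clean[clean["rating_numerator"] <= 20]           # drop implausible ratings
clean.to_csv("twitter_archive_master.csv", index=False)  # store the wrangled data
```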
This data comes from a Kaggle dataset that tracks the on-time performance of US domestic flights operated by large air carriers in 2015.
I created four visualizations that answered the following questions:
Which airlines have the worst delays?
Which airports have the worst delays?
What is the most common reason for cancelling flights?
Which states have the most cancelled flights?
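Each of the four questions above reduces to a groupby-and-aggregate. This is a minimal sketch on toy rows; the column names (`AIRLINE`, `ORIGIN_AIRPORT`, `CANCELLATION_REASON`, etc.) follow the Kaggle flight-delays CSV, except `STATE`, which in the real dataset comes from joining the airports table and is inlined here for brevity:

```python
import pandas as pd

# Toy rows in the rough shape of the 2015 flight-delays data
flights = pd.DataFrame({
    "AIRLINE": ["AA", "AA", "DL", "DL", "UA", "UA"],
    "ORIGIN_AIRPORT": ["ORD", "ORD", "ATL", "ATL", "DEN", "DEN"],
    "ARRIVAL_DELAY": [30, 50, 5, 15, 10, 20],
    "CANCELLED": [0, 0, 1, 1, 0, 1],
    "CANCELLATION_REASON": [None, None, "A", "A", None, "B"],
    "STATE": ["IL", "IL", "GA", "GA", "CO", "CO"],
})

# Worst average arrival delay by airline and by origin airport
worst_airline = flights.groupby("AIRLINE")["ARRIVAL_DELAY"].mean().idxmax()
worst_airport = flights.groupby("ORIGIN_AIRPORT")["ARRIVAL_DELAY"].mean().idxmax()
# Most common cancellation reason among cancelled flights
top_cancel_reason = flights.loc[flights["CANCELLED"] == 1, "CANCELLATION_REASON"].mode()[0]
# State with the most cancelled flights
most_cancelled_state = flights.groupby("STATE")["CANCELLED"].sum().idxmax()
```

Plotting the full aggregated Series (rather than just `idxmax()`) gives the bar charts behind each visualization.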
I found a Titanic dataset on Kaggle and, because I really loved the movie, decided to draw insights from it.
I used a machine learning model to determine whether a passenger on the ship survived. Since this is a classification problem, I used logistic regression to build a suitable model.
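A minimal sketch of that classification step with scikit-learn's `LogisticRegression` is below. The feature columns (passenger class, sex, age) and the toy values are assumptions for illustration; a real workflow would also hold out a test set and handle missing values:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features in the spirit of the Kaggle Titanic data (columns assumed):
# [passenger class, sex (0=male, 1=female), age]
X = np.array([
    [1, 1, 38], [3, 1, 26], [1, 1, 35], [2, 1, 27],
    [3, 0, 22], [3, 0, 35], [1, 0, 54], [3, 0, 2],
])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = survived, 0 = did not survive

model = LogisticRegression(max_iter=1000).fit(X, y)
pred = model.predict([[2, 1, 30]])  # predict for a new passenger
```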