The first thing we did was create a FizzBuzz program. The code works in three parts:
Iteration with a for loop:
The code uses a for loop to go through each number from 1 to 100. The range(1, 101) function generates a sequence of numbers starting from 1 and ending at 100. Each number in this sequence is assigned to the variable num on each iteration of the loop.
Checking for divisibility with the modulo operator (%):
The core of the FizzBuzz logic relies on the modulo operator (%). This operator returns the remainder of a division. For example, 7 % 3 equals 1, because 7 divided by 3 has a remainder of 1.
The code uses num % 3 == 0 to check whether the number num is divisible by 3. If the remainder is 0, the number is perfectly divisible. Similarly, num % 5 == 0 checks for divisibility by 5.
Conditional logic with if, elif, and else:
The code uses a series of if, elif (else if), and else statements to apply the FizzBuzz rules:
if num % 3 == 0 and num % 5 == 0:: This is the first condition checked. It verifies that the number is divisible by both 3 and 5; if it is, "fizzbuzz" is printed. Importantly, this must be the first condition: if it were checked last, a number divisible by both (such as 15) would already have matched one of the single-divisor conditions, and "fizzbuzz" would never be printed.
elif num % 3 == 0:: If the previous condition is false, this condition checks if the number is divisible by 3. If it is, "fizz" is printed.
elif num % 5 == 0:: If the previous conditions are false, this condition checks if the number is divisible by 5. If it is, "buzz" is printed.
else:: If none of the previous conditions are true (meaning the number is not divisible by 3 or 5), the else block is executed, and the number itself is printed.
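Putting the three parts together, the whole program can be sketched as follows. This is a minimal version: the loop variable is named num to match the conditions above, and wrapping the rules in a helper function is a presentation choice rather than necessarily how the original script was laid out.

```python
def fizzbuzz(num):
    """Return the FizzBuzz word for a single number."""
    if num % 3 == 0 and num % 5 == 0:
        # Checked first: 15, 30, ... are divisible by both 3 and 5.
        return "fizzbuzz"
    elif num % 3 == 0:
        return "fizz"
    elif num % 5 == 0:
        return "buzz"
    else:
        # Not divisible by 3 or 5: the number itself is printed.
        return str(num)


# Iterate over 1..100 inclusive and print each result.
for num in range(1, 101):
    print(fizzbuzz(num))
```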
Basic web scraper in Python.
This web scraper extracts data from the 'Leadership by sector' table on the Wikipedia page for the Fortune Global 500.
Initially, the script imports the Beautiful Soup library, which is used for parsing HTML and XML documents. Subsequently, it sends a request to retrieve the content of the Wikipedia page and then uses Beautiful Soup to parse and display the HTML structure.
The script then identifies all tables within the page and isolates the fourth table, corresponding to 'Leadership by sector'. Once located, it extracts the table's header row.
The extracted headers are then processed to ensure consistency and are presented as follows:
['Sector', 'Company', 'Headquarters', 'Overall rank', 'Revenues ($M)']
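The header-extraction steps can be sketched like this. To keep the example self-contained, it parses a small inline HTML snippet rather than fetching the live Wikipedia page; with the real page you would first download the HTML (for example with the Requests library) and select the fourth table via soup.find_all('table')[3].

```python
from bs4 import BeautifulSoup

# Inline stand-in for the HTML returned by the Wikipedia page request.
html = """
<table>
  <tr><th>Sector </th><th>Company </th><th>Headquarters </th>
      <th>Overall rank </th><th>Revenues ($M) </th></tr>
  <tr><td>Retail</td><td>Walmart</td><td>United States</td>
      <td>1</td><td>611,289</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")

# On the real page this would be soup.find_all('table')[3],
# the fourth table ('Leadership by sector').
table = soup.find_all("table")[0]

# Extract the header cells and strip stray whitespace for consistency.
sector_table_title = [th.text.strip() for th in table.find_all("th")]
print(sector_table_title)
# ['Sector', 'Company', 'Headquarters', 'Overall rank', 'Revenues ($M)']
```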
Next, the script imports the Pandas library, a powerful tool for data manipulation and analysis, particularly for creating and managing DataFrames.
An empty Pandas DataFrame, named 'df', is created. The column names of this DataFrame are then assigned using the previously extracted and cleaned header list, 'sector_table_title'.
The script proceeds to extract all data rows from the 'Leadership by sector' table, storing them in a variable named 'column_data'.
Iterating through each row in 'column_data', the script skips the first row, which contains the headers. For each subsequent row, it identifies all data cells (tagged <td>) and stores them in a list called 'row_data'. The current length of the DataFrame is recorded, and a new row is appended to 'df' at this index. The values for this new row are populated from the 'row_data' list.
Finally, the script presents the populated DataFrame, 'df', in a tabular format, displaying the extracted and organized data.
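The row-extraction loop described above can be sketched as follows, again with an inline snippet standing in for the live table and a couple of illustrative rows in place of the full scraped data:

```python
from bs4 import BeautifulSoup
import pandas as pd

# Inline stand-in for the 'Leadership by sector' table, with sample rows.
html = """
<table>
  <tr><th>Sector</th><th>Company</th><th>Headquarters</th>
      <th>Overall rank</th><th>Revenues ($M)</th></tr>
  <tr><td>Retail</td><td>Walmart</td><td>United States</td>
      <td>1</td><td>611,289</td></tr>
  <tr><td>Energy</td><td>Saudi Aramco</td><td>Saudi Arabia</td>
      <td>2</td><td>603,651</td></tr>
</table>
"""

table = BeautifulSoup(html, "html.parser").find("table")

# Cleaned header list, used as the DataFrame's column names.
sector_table_title = [th.text.strip() for th in table.find_all("th")]
df = pd.DataFrame(columns=sector_table_title)

# All rows of the table; slicing with [1:] skips the header row.
column_data = table.find_all("tr")
for row in column_data[1:]:
    row_data = [td.text.strip() for td in row.find_all("td")]
    length = len(df)            # current number of rows in the DataFrame
    df.loc[length] = row_data   # append the new row at that index

print(df)
```

Using df.loc[len(df)] to append each row works because len(df) always points one past the last existing row label, so every new row lands at the next index.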