Let's look at the implementation of neural network models on the phishing dataset.
Let's start by installing Keras, TensorFlow, PennyLane, pandas, Matplotlib, scikit-learn, Seaborn, and NumPy.
Mounting Google Drive and accessing the dataset from it. These are the steps to mount Google Drive through code. First, upload the dataset to Drive and make a note of its path.
Mount Google Drive so that you can access your files using the path '/content/drive', which corresponds to your Google Drive directory. You can combine it with `os` functions to list files, read data, and interact with the files stored in Drive.
When you run the above code cell, a pop-up will appear. Click "Connect to Google Drive".
Then select the Google Drive account in which the dataset is uploaded.
Select "Continue". Select "Continue" again, and the Drive will be mounted.
Importing pandas for data manipulation. Next, define the path to the dataset, load the CSV file into a dataframe, and handle any missing values.
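The loading step can be sketched as follows. The column names and the inline CSV below are hypothetical stand-ins; in the notebook, `pd.read_csv` would point at the dataset path in Drive, and the missing-value strategy (median fill here) is one reasonable choice, not necessarily the one used in the original code.

```python
import io
import pandas as pd

# Hypothetical stand-in for the CSV stored in Drive; in the notebook the
# argument would be a path such as '/content/drive/MyDrive/....csv'.
csv_data = io.StringIO(
    "id,NumDots,UrlLength,CLASS_LABEL\n"
    "1,3,54,1\n"
    "2,1,,0\n"
    "3,2,78,1\n"
)

df = pd.read_csv(csv_data)

# Handle missing values: fill numeric gaps with each column's median.
df = df.fillna(df.median(numeric_only=True))

print(df.isna().sum().sum())  # 0 -> no missing values remain
```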
Import all the required libraries.
Reading the data and displaying 5 random samples from the dataset.
Displaying the dataset information.
Here we prepare the phishing dataset for classification by selecting the most important features. First, we drop columns that should not be fed to the model, such as 'id', and separate out the target 'CLASS_LABEL'. Then we use the ANOVA F-value to select the top 10 features most relevant for predicting the target. Finally, we display a sample of the dataset containing only these features, keeping the model efficient and focused on the key predictors.
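A minimal sketch of this selection step, using synthetic data in place of the phishing dataframe (the real notebook would pass the dataframe with 'id' dropped as `X` and 'CLASS_LABEL' as `y`):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic stand-in for the phishing features and labels.
X, y = make_classification(n_samples=200, n_features=30, n_informative=10,
                           random_state=42)

# ANOVA F-value scoring keeps the 10 features most related to the label.
selector = SelectKBest(score_func=f_classif, k=10)
X_top = selector.fit_transform(X, y)

print(X_top.shape)  # (200, 10)
```

`selector.get_support(indices=True)` would give the column indices of the chosen features, which is how the notebook can map back to feature names.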
Preparing the dataset for training by scaling and splitting the data, using StandardScaler to normalize the features. This scaled data is used for the classical neural network.
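Sketched below with random stand-in data; the split ratio (80/20) and random seed are assumptions, not taken from the notebook:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(100, 10))  # stand-in features
y = rng.integers(0, 2, size=100)                    # stand-in labels

# Zero-mean, unit-variance scaling for the classical network.
X_scaled = StandardScaler().fit_transform(X)

X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.2, random_state=42)
```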
Preparing the dataset for training by scaling and splitting the data, using MinMaxScaler to map the features into the range [0, 2π]. This scaled data is used for the quantum neural network, where features are encoded as rotation angles.
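The [0, 2π] scaling is done with `feature_range`; the stand-in data below is illustrative:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))  # stand-in for the selected features

# Map each feature into [0, 2*pi] so values can serve as rotation angles
# for the quantum circuit's angle embedding.
scaler = MinMaxScaler(feature_range=(0, 2 * np.pi))
X_quantum = scaler.fit_transform(X)
```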
Classical Neural Network
Defining a simple feed-forward Neural Network and printing the summary.
Compile and train a feed-forward neural network using the Adam optimizer and binary crossentropy as the loss function.
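The two steps above (defining, then compiling and training the network) can be sketched as follows. The layer sizes, epoch count, and validation split are illustrative assumptions, not the notebook's exact architecture:

```python
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
X_train = rng.normal(size=(64, 10)).astype("float32")   # stand-in features
y_train = rng.integers(0, 2, size=(64,)).astype("float32")  # stand-in labels

# A small feed-forward network ending in a sigmoid for binary output.
model = keras.Sequential([
    keras.layers.Input(shape=(10,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.summary()

# Adam optimizer with binary crossentropy, as described above.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=2, batch_size=16,
                    validation_split=0.25, verbose=0)
```

`history.history` then holds the per-epoch `loss`, `accuracy`, `val_loss`, and `val_accuracy` used by the plots below.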
Plotting training accuracy and validation accuracy over epochs.
Plotting training loss and validation loss over epochs.
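Both plots can be produced from the Keras `History` object; the dictionary below is a hypothetical stand-in for `history.history` so the sketch is self-contained:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; not needed inside Colab
import matplotlib.pyplot as plt

# Stand-in for the Keras History.history dict (values are placeholders).
history = {
    "accuracy":     [0.60, 0.72, 0.80, 0.85],
    "val_accuracy": [0.58, 0.70, 0.76, 0.79],
    "loss":         [0.68, 0.55, 0.44, 0.37],
    "val_loss":     [0.70, 0.58, 0.50, 0.46],
}

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(history["accuracy"], label="train accuracy")
ax1.plot(history["val_accuracy"], label="validation accuracy")
ax1.set_xlabel("epoch"); ax1.set_ylabel("accuracy"); ax1.legend()

ax2.plot(history["loss"], label="train loss")
ax2.plot(history["val_loss"], label="validation loss")
ax2.set_xlabel("epoch"); ax2.set_ylabel("loss"); ax2.legend()

fig.savefig("training_curves.png")
```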
Making predictions and printing the accuracies.
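Since the network's sigmoid output is a probability, predictions are thresholded before computing accuracy. The probabilities below are a hypothetical stand-in for `model.predict(X_test)`:

```python
import numpy as np
from sklearn.metrics import accuracy_score

# Stand-in for model.predict(X_test): sigmoid outputs in [0, 1].
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.85, 0.1])
y_true = np.array([1,   0,   1,   1,   1,    0])

# Threshold at 0.5 to turn probabilities into class labels.
y_pred = (y_prob > 0.5).astype(int)

print(accuracy_score(y_true, y_pred))  # 0.8333... (5 of 6 correct)
```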
Quantum Neural Network
Set up a 6-qubit quantum circuit using angle embedding for data encoding and strongly entangling layers for processing. It measures the Pauli-Z expectation value on each qubit; random input data and weights are used for demonstration.
Implementing the layers.
Suppressing TensorFlow warnings.
Defining the optimizer and compiling the quantum model.
Plotting training accuracy and validation accuracy over epochs for the quantum model.
Making predictions and printing the accuracies.
Plotting to showcase the accuracies of both the classical and quantum models.
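A sketch of the comparison plot; the two accuracy values are placeholders and would be replaced by the test accuracies computed above:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; not needed inside Colab
import matplotlib.pyplot as plt

# Placeholder accuracies -- substitute the values computed earlier.
results = {"Classical NN": 0.93, "Quantum NN": 0.88}

fig, ax = plt.subplots()
ax.bar(results.keys(), results.values(), color=["steelblue", "seagreen"])
ax.set_ylabel("test accuracy")
ax.set_ylim(0, 1)
for i, v in enumerate(results.values()):
    ax.text(i, v + 0.01, f"{v:.2f}", ha="center")  # label each bar
fig.savefig("model_comparison.png")
```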
Link for the code: Code Link (time taken to run the code: 30–35 minutes)
Link for the graph: Graph
Link for Dataset: Dataset Link