In part 1 I showed you how to manually import Rumble data into Splunk and use SPL to do a lookup that enriches events with asset information. Here I am going to explain how to turn this into an automatic process.
To get started I downloaded rumble to my Splunk box, which runs on Linux.
sudo curl -o /usr/local/bin/rumble https://console.rumble.run/download/scanner/f6db7b197ae260438bcef5e2f303c316/5d8c987d/rumble-scanner-linux-amd64.bin && sudo chmod u+x /usr/local/bin/rumble && sudo rumble version
On a Windows machine you can just click this link right here (assuming you run a 64-bit OS).
We could instruct Splunk to run Rumble on a schedule and import the results. However, Rumble wants to run with root or administrative privileges on your system, which means we would have to run Splunk with elevated privileges. That is something I would not suggest.
To work around this I made sure my user can sudo rumble without entering a password. You can achieve this by running:
sudo visudo
and adding the following line to the config:
<user-name> ALL=NOPASSWD: /usr/local/bin/rumble, /usr/bin/chown
I created a folder called rumble-things under my home folder. There I put the following script, called rumble-script.sh:
#!/bin/bash
# Use full paths throughout: cron does not run this script from ~/rumble-things,
# so a bare "scan.rumble" would point at the wrong file.
rm -f ~/rumble-things/scan.rumble
sudo rumble --text -o disable --output-raw ~/rumble-things/scan.rumble lan 192.168.1.0/24
# rumble ran as root, so the output file is root-owned; hand it back to our user:
sudo chown <username> ~/rumble-things/scan.rumble
The script needs to be executable, and chmod +x achieves this. I thought starting one scan every 5 minutes would be a good idea, and added the following line to my crontab using crontab -e:
*/5 * * * * /home/<username>/rumble-things/rumble-script.sh
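If results never show up, it helps to capture the script's output somewhere. A variant of the same crontab line that appends stdout and stderr to a log file (the cron.log name is just my choice) looks like this:

```
*/5 * * * * /home/<username>/rumble-things/rumble-script.sh >> /home/<username>/rumble-things/cron.log 2>&1
```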
Now the fun begins! I could of course have the script parse the JSON Rumble spits out and update the lookup file created in part 1. But where's the fun in that? Instead I am going to ingest this file into an index in Splunk.
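For completeness, the skipped script-side approach could look roughly like this. It is a minimal sketch that assumes the raw output is newline-delimited JSON and that each record carries address and names fields; both field names are assumptions on my part, so inspect your own scan.rumble before relying on them.

```python
import csv
import json

def rumble_to_lookup(raw_lines, lookup_path):
    """Convert newline-delimited JSON scan output into a Splunk lookup CSV.
    The "address" and "names" keys are assumptions -- check what your
    scanner version actually emits."""
    rows = []
    for line in raw_lines:
        line = line.strip()
        if not line:
            continue
        asset = json.loads(line)
        rows.append({
            "ip": asset.get("address", ""),
            "hostname": ",".join(asset.get("names", [])),
        })
    # Rewrite the lookup file from scratch on every scan.
    with open(lookup_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["ip", "hostname"])
        writer.writeheader()
        writer.writerows(rows)
    return rows

# Example with made-up scan data:
sample = ['{"address": "192.168.1.10", "names": ["NAS"]}',
          '{"address": "192.168.1.11", "names": []}']
rows = rumble_to_lookup(sample, "rumble-lookup.csv")
print(rows[0]["ip"])  # -> 192.168.1.10
```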
By clicking Settings > Add Data I can tell Splunk to monitor a file. In the following menu I can configure the path to that file.
Next Splunk will read the file, but the preview won't look sensible. After telling Splunk this file's sourcetype is _json, it all falls into place.
In the Input Settings dialogue we can associate this input with a specific app context and override the host field. For now I am having this land in the Splunk default app, but it is a good idea to create a dedicated app context for this configuration: it makes porting your config to a new machine easy. I chose to make this data land in an index called rumble. Clicking the Create new index button pops up the New Index dialog, where you can set things such as the maximum size of the index and its location.
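The same input and index can also be defined in config files instead of through the UI. A sketch, assuming my example paths and a 1 GB size cap (both are my own choices, not requirements):

```
# inputs.conf -- monitor the scan output and route it to the rumble index
[monitor:///home/<username>/rumble-things/scan.rumble]
sourcetype = _json
index = rumble

# indexes.conf -- the rumble index itself
[rumble]
homePath   = $SPLUNK_DB/rumble/db
coldPath   = $SPLUNK_DB/rumble/colddb
thawedPath = $SPLUNK_DB/rumble/thaweddb
maxTotalDataSizeMB = 1024
```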
After reviewing the settings we can start searching our new data!
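For example, a search like the one below gives a quick "when was each host last seen" overview. The address field name is an assumption based on my own scan data; substitute whatever fields your events actually carry:

```
index=rumble
| stats latest(_time) as last_seen by address
| convert ctime(last_seen)
```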