Splunk

start a Splunk test instance with Docker

docker pull splunk/splunk

docker run -d -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_USER=root" -p "$Host_IP:8000:8000" splunk/splunk

Start/Stop

/opt/splunk/bin# ./splunk start --accept-license

Searching

search a CSV file: keep only High-risk results for host nyc123, rename the Risk value "None" to "Information", keep only unique rows

source="/var/splunk_csv/reports/nessus_weekly_linux_scan.csv" host="splunk.local" index="nessus_tmp" sourcetype="csv" Risk="High" extracted_Host="nyc123" | rex field=Risk mode=sed "s/None/Information/g" | dedup "Plugin ID",extracted_Host,Risk

search with earliest and latest time modifiers

earliest=-31d latest=-1d

search a host for messages containing "libpcap"

host="nychost01" | where like(message, "%libpcap%")
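The filter, sed-style replace, and dedup steps in the CSV search above can be sketched in plain Python (the sample rows and values here are made up; the field names mirror the SPL search):

```python
import csv
import io

# Hypothetical rows mimicking a Nessus CSV export; field names mirror the SPL search.
raw = """Plugin ID,extracted_Host,Risk
10001,nyc123,High
10001,nyc123,High
10002,nyc123,None
10003,bos001,High
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Risk="High" extracted_Host="nyc123"
rows = [r for r in rows if r["Risk"] == "High" and r["extracted_Host"] == "nyc123"]

# rex field=Risk mode=sed "s/None/Information/g"
for r in rows:
    r["Risk"] = r["Risk"].replace("None", "Information")

# dedup "Plugin ID",extracted_Host,Risk -- keep the first row per key
seen, unique = set(), []
for r in rows:
    key = (r["Plugin ID"], r["extracted_Host"], r["Risk"])
    if key not in seen:
        seen.add(key)
        unique.append(r)
```

Note the SPL pipeline filters on Risk="High" before the sed replace runs, so "None" rows never reach the rename; the same ordering is kept here.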

Indexes

clean all data on an index

stop Splunk

/opt/splunk/bin/splunk clean eventdata -index nessus

start splunk

Add new index for a user to search:

settings > access controls > roles > your role > add search index

Use test user to check permissions

Extracted Fields

share an extracted field

Settings > Fields > Field Extractions > search for the field, change the sharing permission

Charts & Pivot

Generate a new field from a search (creates lag_sec field)

search index=atlassian _index_earliest=-1d@d _index_latest=@d | eval lag_sec = (_indextime-_time)

compute indexing lag and gaps, aggregate with stats, filter and sort

_index_earliest=-1d@d _index_latest=@d

| search host!=*test

| eval lag_sec = (_indextime-_time)

| eval lag_hrs = lag_sec/(60*60)

| eval delay_hrs = if( lag_hrs > 0.5, lag_hrs, "")

| eval future_sec = if( lag_sec < -1, -1*lag_sec, "")

| eval containsGap = if(delay_hrs!="" OR future_sec!="", "true", "false")

| stats max(delay_hrs),

max(future_sec),

count(eval(containsGap="true")) as countGaps,

count(_raw) as countEvents

by splunk_server index host sourcetype source

| eval percentGaps = countGaps / countEvents*100

| where percentGaps>5

| sort host, sourcetype, source
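The lag math in the search above can be checked outside Splunk. A toy sketch with made-up (_indextime, _time) pairs in epoch seconds:

```python
# Toy (_indextime, _time) pairs; mirrors eval lag_sec = (_indextime - _time)
events = [
    (1_700_000_000, 1_700_000_000 - 10),    # indexed 10 s after event time: no gap
    (1_700_000_000, 1_700_000_000 - 7200),  # indexed 2 h late: delay gap
    (1_700_000_000, 1_700_000_000 + 300),   # event timestamped in the future: gap
]

gaps = 0
for indextime, time in events:
    lag_sec = indextime - time
    lag_hrs = lag_sec / (60 * 60)
    delay = lag_hrs if lag_hrs > 0.5 else None    # eval delay_hrs
    future = -lag_sec if lag_sec < -1 else None   # eval future_sec
    if delay is not None or future is not None:   # eval containsGap
        gaps += 1

percent_gaps = gaps / len(events) * 100           # eval percentGaps
```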

search using a Macro (configure in Settings > Advanced Search)

| `puppet_status`

get a list of all scheduled searches

| rest /services/saved/searches | where is_scheduled=1

all raw events to table

source="/var/log/messages" host="hydra" | table _raw

Search the "ps" sourcetype: exclude a process name, match by hostname, concatenate fields into one field, average CPU by process, filter by avgCPUUsed, sort descending, rename a field

(source=ps) process_name!="-bash" host="$hostname$" | eval Process= process_name." ".ARGS | stats avg(process_cpu_used_percent) AS avgCPUUsed by Process, user | where avgCPUUsed > 80 | sort -avgCPUUsed | rename avgCPUUsed AS "% CPU Used by Process"
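The exclude / concat / average / filter / sort steps above can be sketched in Python (the sample processes and CPU numbers are invented):

```python
from collections import defaultdict

# Hypothetical "ps" samples: (process_name, ARGS, user, process_cpu_used_percent)
samples = [
    ("java",  "-jar app.jar", "jira",  95.0),
    ("java",  "-jar app.jar", "jira",  85.0),
    ("sshd",  "-D",           "root",   1.0),
    ("-bash", "",             "admin", 99.0),   # excluded: process_name!="-bash"
]

totals = defaultdict(list)
for name, args, user, cpu in samples:
    if name == "-bash":
        continue
    process = f"{name} {args}"           # eval Process = process_name." ".ARGS
    totals[(process, user)].append(cpu)

# stats avg(...) AS avgCPUUsed by Process, user | where avgCPUUsed > 80 | sort -avgCPUUsed
avg_cpu = {k: sum(v) / len(v) for k, v in totals.items()}
hogs = sorted(((k, v) for k, v in avg_cpu.items() if v > 80),
              key=lambda kv: -kv[1])
```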

Forwarders

add a new forwarder config to monitor a file:

    1. on box with forwarder, go to /opt/splunkforwarder/etc/apps/

    2. create dir for your app /opt/splunkforwarder/etc/apps/myapp/local/

    3. add inputs.conf

[monitor:///opt/myapp/csv/*.csv]

index=myapp

sourcetype=csv

    4. restart the forwarder so the new inputs.conf takes effect: /opt/splunkforwarder/bin/splunk restart

Add new forwarder

Alerts

create a new alert for a search, run every 15 min, send an email if the threshold is reached, include # of occurrences

    • Settings > Searches, reports, and alerts > New Alert

    • add an Alert description; for Search, enter a search filter

    • Alert Type Scheduled, run on cron schedule

    • Time Range: 15 min

    • Trigger alert when # of Results > 0

    • Send email, Subject = My Alert: $job.resultCount$

    • Message: There were $job.resultCount$ errors reported on $trigger_date$.
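Splunk expands the $job.resultCount$ and $trigger_date$ tokens when the alert fires; a toy sketch of that substitution (the values here are made up, and Python's Template placeholders stand in for Splunk's $…$ tokens):

```python
from string import Template

# Python Template placeholders stand in for Splunk's $job.resultCount$ / $trigger_date$ tokens
message = Template("There were $resultCount errors reported on $trigger_date.")
rendered = message.substitute(resultCount=7, trigger_date="2024-05-01")
```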

Regex

\s - white space (\d\s\d - digit space digit)

\S - not white space

\d - digit (\d\d\d-\d\d-\d\d\d\d - SSN #)

\D - not digit

\w - word character

\W - not word

[...] - any included char ([a-z0-9#] - any char that is a-z, 0-9 or #)

[^...] - excluded char ([^xyz] - any char but xyz)

* - zero or more

+ - 1 or more

? - zero or 1

| - or (\w|\d - word or digit char)
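The classes and quantifiers above, demonstrated with Python's re module (the sample strings are arbitrary):

```python
import re

# Each pattern mirrors a row in the reference table above
assert re.search(r"\d\s\d", "4 2")                            # digit space digit
assert re.fullmatch(r"\d\d\d-\d\d-\d\d\d\d", "123-45-6789")   # SSN shape
assert re.fullmatch(r"\D", "x")                               # not a digit
assert re.fullmatch(r"[a-z0-9#]+", "abc9#")                   # included chars
assert not re.fullmatch(r"[^xyz]", "x")                       # excluded chars
assert re.fullmatch(r"ab*c", "ac")                            # * = zero or more
assert re.fullmatch(r"ab?c", "ac")                            # ? = zero or one
assert re.fullmatch(r"(\w|\d)+", "a1")                        # alternation
```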

Create a Pivot

1. do a search for index, use All Time, save the search as a Dataset

2. go to Datasets > select your Dataset, create a new Pivot from the dataset

Rex field extraction

index=myindex | rex field=myField "(?<ENV>[A-Z0-9]+)"

myField=/opt/jira/plugins/nessus/csv/NYC_windows_server_cgy_host.csv

myField=/opt/jira/plugins/nessus/csv/TX2_linux_server_cal_host.csv

myField=/opt/jira/plugins/nessus/csv/WASH_windows_server_host.csv

ENV=NYC

ENV=TX2

ENV=WASH
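The same named-group extraction, checked with Python's re over the sample paths above (Python uses (?P<ENV>…) where Splunk rex accepts (?<ENV>…); the class includes digits so TX2 matches fully):

```python
import re

paths = [
    "/opt/jira/plugins/nessus/csv/NYC_windows_server_cgy_host.csv",
    "/opt/jira/plugins/nessus/csv/TX2_linux_server_cal_host.csv",
    "/opt/jira/plugins/nessus/csv/WASH_windows_server_host.csv",
]

# [A-Z0-9]+ rather than [A-Z]+ so environments with digits like TX2 match fully
envs = [re.search(r"(?P<ENV>[A-Z0-9]+)", p).group("ENV") for p in paths]
```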

Data

Freeze data

edit /opt/splunk/etc/apps/search/local/indexes.conf

add the retention period in seconds (15552000 = 180 days)

frozenTimePeriodInSecs = 15552000

restart the splunkd service on the server or indexer
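The seconds value above is just days converted to seconds:

```python
# 180 days expressed in seconds, matching frozenTimePeriodInSecs above
days = 180
frozen_time_period_in_secs = days * 24 * 60 * 60
```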

Troubleshooting/Diagnostics

Forwarder not sending data

    1. check your Search, make sure the time frame is All Time

    2. on the indexer, search the internal logs for forwarder errors:

    index=_internal host=<forwarder hostname> (log_level=WARN OR log_level=ERROR)

    index=_internal source=*splunkd.log host=<forwarder hostname>

    3. check stanza and formatting, and check the sourcetype in inputs.conf

    4. add the index name to both the Search Head and Indexer Splunk instances (log in to both consoles, go to Settings > Indexes, add the index)

    5. check indexes.conf on the Indexer server (/opt/splunk/etc/apps/search/local/indexes.conf) and make sure it matches the index name

    6. on the Forwarder, run /opt/splunkforwarder/bin/splunk list monitor; it shows all monitored files, make sure your file is there

    7. check whether forwarders are connecting to the indexer:

    index=_internal source=*metrics.log* tcpin_connections | stats count by sourceIp

check additional troubleshooting steps here

show where Splunk is sending data to

/opt/splunkforwarder/bin/splunk list forward-server

Show search peers' % memory usage

| rest splunk_server=* /services/server/status/resource-usage/hostwide | stats first(normalized_load_avg_1min) as load_average first(cpu_system_pct) as system, first(cpu_user_pct) as user first(mem) AS mem first(mem_used) AS mem_used by splunk_server | fields splunk_server mem mem_used | eval pctmemused=round((mem_used/mem)*100)."%" | table splunk_server pctmemused | rename splunk_server as "Splunk Server" pctmemused as "Percent of Memory Used"
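The percentage math at the end of that search, sketched in Python (the mem numbers are invented):

```python
# Mirrors eval pctmemused=round((mem_used/mem)*100)."%"
mem, mem_used = 64000, 48000   # hypothetical values from the resource-usage endpoint
pctmemused = f"{round(mem_used / mem * 100)}%"
```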

Dashboards & Apps

add Dashboard into App view

Settings > Knowledge > User interface > Navigation menus > (name of the nav for the app) > add the name of the dashboard

<nav search_view="search">

<view name="metrics" default='true' />

<view name="qb_infra_top_snap_1" />

<view name="qb_infra_user_history" />

<view name="new_dashboard" />

<view name="search" />

<view name="datasets" />

<view name="reports" />

<view name="alerts" />

</nav>
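The nav XML above is plain XML; a quick Python sketch parsing it to list the views in menu order and find the default view:

```python
import xml.etree.ElementTree as ET

# The nav XML from above, embedded as a string for illustration
nav_xml = """<nav search_view="search">
<view name="metrics" default='true' />
<view name="qb_infra_top_snap_1" />
<view name="qb_infra_user_history" />
<view name="new_dashboard" />
<view name="search" />
<view name="datasets" />
<view name="reports" />
<view name="alerts" />
</nav>"""

root = ET.fromstring(nav_xml)
views = [v.get("name") for v in root.findall("view")]
default = [v.get("name") for v in root.findall("view") if v.get("default") == "true"]
```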

Troubleshooting links

Forwarder not sending data - troubleshooting

I can't find my data - troubleshooting