
Splunk Log Forwarding Configuration Steps

There are multiple ways to send Linux logs to Splunk, such as the Splunk Linux app, the Splunk Universal Forwarder, or syslog. The most reliable and best-performing approach is to install the Splunk Universal Forwarder on each Linux machine whose data you wish to forward. The universal forwarder acts as an agent for log collection: it collects the logs and forwards them to the indexer. Alternatively, you can collect logs on a central syslog server, install a Splunk forwarder on that server, and forward the data from the syslog server to the Splunk indexer. Below are the steps for the most reliable method of adding Linux logs to Splunk.

For syslog installation and configuration, follow the steps given at the link below. To add Linux logs to Splunk, refer to the following steps.

Step 1: On the Splunk server (receiver)

Download and install the Splunk TA for Unix and Linux on the Splunk server (receiver) and enable it by going to Manager | Apps | Enable.

Step 2: On the host you want to collect data from (sender)

Download and install the Splunk Universal Forwarder on the Linux host using the steps given at the link below:

https://apps.splunk.com/app/833/
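Installation is typically either a package install or a tarball extraction into /opt. For example (the filenames below are placeholders; use the package you actually downloaded):

rpm -i splunkforwarder-<version>-linux-2.6-x86_64.rpm

or

tar xvzf splunkforwarder-<version>-Linux-x86_64.tgz -C /opt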

Step 3: Open firewall port tcp 9997 (on the receiver)

iptables -I INPUT 4 -p tcp --dport 9997 -j ACCEPT
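This rule does not survive a reboot on its own; persist it with your distribution's mechanism, for example on older RHEL/CentOS systems (an assumption, adjust for your distribution):

service iptables save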

Step 4: Copy the default inputs.conf 

file from $SPLUNK_HOME/etc/system/default to $SPLUNK_HOME/etc/system/local, and check that the file ownership (UID/GID) is splunk:splunk. On the universal forwarder, $SPLUNK_HOME is typically /opt/splunkforwarder.
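A minimal sketch of this copy, assuming the forwarder is installed under /opt/splunkforwarder and runs as the splunk user:

cp /opt/splunkforwarder/etc/system/default/inputs.conf /opt/splunkforwarder/etc/system/local/
chown splunk:splunk /opt/splunkforwarder/etc/system/local/inputs.conf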

Step 5: Edit the inputs.conf file 

located in $SPLUNK_HOME/etc/system/local to get a basic setup in place, for example:

[default]
host = mysender.local

[monitor:///var/log/secure]
disabled = false

Step 6: Start the splunkforwarder

service splunk start

or 

/etc/init.d/splunk start
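If no init script is installed, the forwarder can also be started directly from its bin directory (assuming the default /opt/splunkforwarder install path):

/opt/splunkforwarder/bin/splunk start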

Step 7: Tell the forwarder where to send its data

On the forwarder, add the receiving indexer (replace the placeholder with your receiver's hostname):

/opt/splunkforwarder/bin/splunk add forward-server <receiver_hostname>:9997

You should be getting data in Splunk now.
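To confirm the forwarder has registered the receiver, you can list its configured forward-servers:

/opt/splunkforwarder/bin/splunk list forward-server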

Step 8: Now download and manually install Splunk Technology Add-on 

(TA) for Unix and Linux onto the host you want to collect data from (sender); it will end up here:

$SPLUNK_HOME/etc/apps/Splunk_TA_nix
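A manual install typically amounts to extracting the downloaded add-on package into the forwarder's apps directory, for example (the package filename is a placeholder):

tar xvzf splunk-add-on-for-unix-and-linux_<version>.tgz -C /opt/splunkforwarder/etc/apps/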

Step 9: Copy the default inputs.conf 

file from $SPLUNK_HOME/etc/apps/Splunk_TA_nix/default/inputs.conf to $SPLUNK_HOME/etc/apps/Splunk_TA_nix/local/
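If the local directory does not exist yet, create it first (an assumption; a fresh TA install usually ships only its default directory):

mkdir -p /opt/splunkforwarder/etc/apps/Splunk_TA_nix/local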

cp /opt/splunkforwarder/etc/apps/Splunk_TA_nix/default/inputs.conf /opt/splunkforwarder/etc/apps/Splunk_TA_nix/local/

Step 10:  Edit the inputs.conf file 

located in /opt/splunkforwarder/etc/apps/Splunk_TA_nix/local/. Enable the items you wish to monitor by setting the "disabled" attribute to "false", then restart the forwarder (as shown below).
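For example, to restart the forwarder (assuming the default install path):

/opt/splunkforwarder/bin/splunk restart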

Below are a few sample inputs.conf and outputs.conf configurations.

Inputs.conf example:

[monitor:///var/log/secure]
disabled = false
sourcetype = linux_secure

[monitor:///var/log/messages]
disabled = false
sourcetype = syslog
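Outputs.conf example (this is the kind of configuration the add forward-server command in Step 7 generates; the receiver hostname below is a placeholder):

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = <receiver_hostname>:9997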

Step 11: Enable receiving on the indexer 

on port 9997. Go to Settings >> Forwarding and receiving >> Configure receiving, and add/enable TCP port 9997.
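The same can be done from the indexer's command line (assuming Splunk Enterprise is installed under /opt/splunk):

/opt/splunk/bin/splunk enable listen 9997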

Step 12: Verify in Splunk that your data is indexed

by searching for the logs or the hostname through the Splunk Search GUI.
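For example, a search along these lines should show the forwarded events (the hostname is the placeholder used earlier in this guide):

index=* host=mysender.local sourcetype=linux_secure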

