
Snort 3.0 with Elasticsearch, Logstash, and Kibana (ELK)

The Elastic Stack, consisting of Elasticsearch, Logstash, and Kibana, commonly abbreviated "ELK", makes it easy to enrich, forward, and visualize log files.  ELK is especially good for getting the most from your Snort 3.0 logs.  This post will show you how to create a cool dashboard:




The dashboard shows the following:
  • bring_da_heat - a heat map that plots event priority vs classification
  • apple_pie - a pie chart that shows total bytes transferred by app
  • greatest_hits - a data table that shows the rules generating the most events
  • global_hot_spots - a geo plot of the event source address*
  • size_o_gram - a histogram of logged packet / buffer sizes

Get Started

To get started, you will need Snort 3.0 (with Open App ID) and the Elastic Stack.  Go ahead and install them now if you haven't done so already; there is plenty of help for that available elsewhere.  Some things to note:
  • The GitHub repo is updated multiple times per week, and the master branch is always kept clean, so cloning it is the best way to get Snort 3.0.
  • The base appid module is built into Snort 3.0, but you will need Open App ID to get the Lua detector plugins.
  • You can use the community rules in 3.0 format, or translate other 2.X rules with snort2lua.

Run Snort

The next step is to get Snort running and generating events and app stats.  Add the following to the default config file (the file passed with the -c argument in the command below):

appid =
{
    log_stats = true,
    app_detector_dir = 'ODP'
}

alert_json =
{
    fields = 'timestamp pkt_num proto pkt_gen pkt_len dir src_addr src_port dst_addr dst_port service rule priority class action b64_data'
}

The capitalized placeholder tokens used above and below are as follows:
  • ODP is the path where you installed Open App ID.  Note this path does not include the trailing /odp.
  • INSTALL is the install prefix you used when configuring your Snort 3.0 build.
  • RULES is the path containing the community rules.
  • PCAP is your favorite pcap.  You could use -i <iface> instead. 
This command will process your pcap and generate alerts.json and app_stats.log files in your current directory:

INSTALL/bin/snort \
-c INSTALL/etc/snort/snort.lua \
-R RULES/snort3-community.rules \
--plugin-path INSTALL/lib \
-r PCAP \
-A json -y -q > alerts.json

With the fields configured above, each JSON event looks like this:

{ "timestamp" : "03/08/01-04:21:07.583700", "pkt_num" : 737, "proto" : "UDP", "pkt_gen" : "raw", "pkt_len" : 161, "dir" : "C2S", "src_addr" : "192.168.16.222", "src_port" : 3076, "dst_addr" : "239.255.255.250", "dst_port" : 1900, "service" : "unknown", "rule" : "1:1917:15", "priority" : 3, "class" : "Detection of a Network Scan", "action" : "allow", "b64_data" : "TS1TRUFSQ0ggKiBIVFRQLzEuMQ0KSG9zdDoyMzkuMjU1LjI1NS4yNTA6MTkwMA0KU1Q6dXJuOnNjaGVtYXMtdXBucC1vcmc6ZGV2aWNlOkludGVybmV0R2F0ZXdheURldmljZToxDQpNYW46InNzZHA6ZGlzY292ZXIiDQpNWDozDQoNCg==" }
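
These events are line-delimited JSON, so they are easy to post-process outside of ELK as well.  As a minimal sketch (not part of the Snort or ELK tooling), the snippet below parses one alert and decodes the base64 packet data; the record is abbreviated from the sample event above, and the parse_alert helper is a name invented here for illustration:

```python
import base64
import json

# Abbreviated alert record, taken from the sample alerts.json event above.
line = ('{"rule": "1:1917:15", "priority": 3, '
        '"b64_data": "TS1TRUFSQ0ggKiBIVFRQLzEuMQ0KSG9zdDoyMzkuMjU1LjI1NS4yNTA6MTkwMA0K"}')

def parse_alert(raw):
    """Parse one JSON alert line and decode the base64 packet payload."""
    event = json.loads(raw)
    if "b64_data" in event:
        event["payload"] = base64.b64decode(event["b64_data"]).decode("utf-8", "replace")
    return event

event = parse_alert(line)
# event["payload"] now holds the decoded packet data (an SSDP M-SEARCH here).
```

This is the same transformation Kibana will do for us later via the Base64 Decode field formatter.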

The app stats are in CSV format with a Unix timestamp, app name, bytes to client, and bytes to server:

1059733200,FTP Data,4441712,185694921
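
If you want to sanity-check these rows outside of ELK, a small sketch like the one below parses them and sums the two byte counters, mirroring the app_total_bytes scripted field we will build in Kibana later (the field order is assumed from the sample line above, and parse_app_stats is a helper name invented here):

```python
import csv
import io

# One row from app_stats.log: unix timestamp, app, bytes to client, bytes to server.
sample = "1059733200,FTP Data,4441712,185694921\n"

def parse_app_stats(text):
    """Parse app_stats.log rows, adding a total-bytes field per row."""
    rows = []
    for ts, app, to_client, to_server in csv.reader(io.StringIO(text)):
        rows.append({
            "timestamp": int(ts),
            "app": app,
            "bytes_to_client": int(to_client),
            "bytes_to_server": int(to_server),
            # Same arithmetic as the Kibana scripted field app_total_bytes.
            "app_total_bytes": int(to_client) + int(to_server),
        })
    return rows

rows = parse_app_stats(sample)
```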

Run ELK

Now let's process these logs with the Elastic Stack.  Start by running Elasticsearch and Kibana as follows:

cd elasticsearch-5.5.1/
bin/elasticsearch -v &

cd kibana-5.5.1-darwin-x86_64
bin/kibana &

I have version 5.5.1 of ELK installed on OS X; adjust the paths as needed for your install.  We are using the default ports: 9200 for Elasticsearch and 5601 for Kibana.  You may need to adjust these on your system.

Now we are ready to send the logs to Elasticsearch using Logstash.  Get the config files here.  Edit snort_json.txt and snort_apps.txt and set the path on the third line to point to your log files.  Then you can run Logstash like this:

cd logstash-5.5.1/
bin/logstash -f snort_json.txt &
bin/logstash -f snort_apps.txt &
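
Use the provided config files for real runs, but as a rough illustration of what such a pipeline contains, a minimal Logstash config for the JSON alerts might look like the following (a hypothetical sketch, not the actual file contents; the log path is an assumption, while the index name matches what we create below):

```
input {
  file {
    path => "/path/to/alerts.json"      # the line you point at your log file
    start_position => "beginning"
  }
}
filter {
  json { source => "message" }          # parse each alert line as JSON
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-snort3j"
  }
}
```

The app stats pipeline is analogous, with a csv filter in place of the json filter.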

Visualize

The logstash commands will populate the logstash-snort3j and logstash-snort3a indices in Elasticsearch.  At this point we can start building the dashboard with Kibana.  Point your browser to http://localhost:5601/ and follow these steps:
  1.   Click on the gear (Management), Index Patterns, + Create Index Pattern, set the name logstash-snort3j, and then click Create.
  2.   Edit b64_data (click pencil on right), set Format = String and Transform = Base64 Decode, and then click Update Field.
  3.   Click on the gear (Management), Index Patterns, + Create Index Pattern, set the name logstash-snort3a, and then click Create.
  4.   Click the scripted fields tab, + Add Scripted Field, set Name = app_total_bytes and Script = doc['bytes_to_client'].value+doc['bytes_to_server'].value and then click Create Field.
At this point you can click the icons on the left for Discover, Visualize, and Dashboard to view the raw data, create tables and charts, and build a dashboard.  This is really best done by exploring and experimenting.  However, you can import the dashboard shown above by clicking Management, Saved Objects, Import and selecting snort_dash.json.  Tip: base your visualizations on saved searches so that you don't lose them when the data is deleted.

snort_csv.txt is also provided for use with snort -A csv if you want to process alerts in CSV format.  The index name for that is logstash-snort3.

* Snort 3.0 supports the target rule option, so use that instead of the source address if your rules have targets.  That gets the attacker correct for shellcode, etc.
