Zero Dollar Detection and Response Orchestration with n8n, Security Onion, TheHive, and Velociraptor

Introduction

The following article serves as an example of how we can leverage free tools to automatically search for indicators of interest (such as file hashes or filenames) across thousands of endpoints, take response actions, and enrich existing datasets when they are found.

Tools Overview

n8n — a free and open workflow automation tool that possesses many different built-in integrations, as well as the ability to easily extend existing capabilities.

Velociraptor — a highly extensible and flexible, free and open-source tool for endpoint detection and response, built on a powerful language called VQL, with development led by Mike Cohen (whose previous work includes Google GRR, Rekall, and more).

Security Onion — a free and open platform for intrusion detection, enterprise security monitoring, and log management. Started by Doug Burks, and first released in 2009, Security Onion has undergone several iterations, while helping defenders to peel back the layers of their networks. The latest version, named Security Onion 2, consists of various components, such as Suricata, Zeek, Google Stenographer, TheHive/Cortex, Osquery, Wazuh, Strelka, and the Elastic Stack.

TheHive is a free and open source platform for security incident response that allows analysts to track incidents via cases, as well as enrich observables via Cortex for greater context. TheHive is also tightly integrated with MISP for case and threat intelligence correlation. In short, TheHive provides a mechanism for security teams to receive and triage alerts, enrich data included with those alerts (or found throughout triage), and track activities along the way.

TheHive is included with Security Onion (if enabled), and centers around the idea of alerts (indications that something might be amiss) and cases (investigations into alerts or something that warrants further inspection). Alerts are usually generated by external sources (sometimes called “feeders”) and can also be merged into cases, if desired.

In Security Onion, tools like Suricata, Playbook, or Wazuh generate alerts, which can be escalated to cases in TheHive. We can view alerts inside of the SOC (Security Onion Console) Alerts interface:

Security Onion — Alerts

Security Onion — Alerts interface

Within the SOC Alerts interface, we can filter through different alerts generated by various data sources (like Zeek, Suricata, Playbook/Sigma, Osquery, Wazuh, or Strelka/YARA), drill down into specific alerts, acknowledge/dismiss alerts, or pivot from Alerts to the Hunt interface (to perform more free-form hunting) or another tool.

We can also push an event to TheHive by clicking the blue triangle — this will create a new case in TheHive, where we can track our ongoing investigation.

TheHive — Cases

Here we can see that the value from the rule.name field in Hunt populates the case Title field, and the raw event comprises the Description field.

Analysts can associate tags, tasks, and more with a case. Once a case is opened, even more can be attached to the case, such as “observables”:

Observables in TheHive

Observables can serve as IOCs, or really anything to make note of during an investigation (observables can be marked as IOCs within the interface, as above). TheHive supports the notion of “Analyzers” and “Responders” (via Cortex), which can act on observables, enriching cases with additional information and potentially initiating certain response actions in an ad-hoc fashion (which could be automated by a platform like n8n) — we’ll look at this later.

In this instance, we are mirroring an investigation that started by hunting through events in Security Onion — an event was then pushed to a case in TheHive, and from there, an indicator was associated with our case by adding it as an observable. We’ll discuss how we can automatically search Velociraptor clients for an observable whenever it is added, and even take response actions when found.

Getting Started

Security Onion/TheHive

If you want to install TheHive by itself, installation steps can be found here:

https://github.com/TheHive-Project/TheHiveDocs/blob/master/installation/install-guide.md

…or you can experiment using a TheHive training VM:

https://github.com/TheHive-Project/TheHiveDocs/blob/master/training-material.md

Velociraptor

If you’d like to try using Velociraptor on top of Security Onion, check out:

https://github.com/weslambert/securityonion-velociraptor

n8n

Getting the Work Flowing

n8n — TheHive Trigger Node

As you can see above, we’ve generated a webhook URL which we will provide to the webhook configuration section in TheHive’s application.conf file:

webhooks {
  n8nWebhook {
    url = "http://n8n/$webhook"
  }
}

In Security Onion, the default file from /opt/so/saltstack/default/salt/thehive/etc/application.conf can be copied to /opt/so/saltstack/local/salt/thehive/etc/application.conf and modified from there.

Once we have the initial webhook integration set up, we will be monitoring observable creation events from TheHive in n8n, and can filter through them for certain observable types, like hashes or filenames.

n8n — Switch Node

In our example, we determine which path to take based on the type of observable (filename or hash). First, we write an expression to extract the value of the field we are looking for.

By clicking the Value field in the configuration panel, we can set the following expression, which will access the value for dataType:

Then we can set our routing rules, which are based on the value derived from the dataType field:

In short, if the observable type is hash, then it will follow path 0. If the observable type is filename, it will follow path 1.
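To make the routing concrete, here is a minimal Python sketch of what the Switch node decides. The payload shape mirrors the fields referenced by the n8n expressions (`$json["body"]["object"]["dataType"]`, etc.); the surrounding keys shown are illustrative, not an exact TheHive webhook dump.

```python
# Sketch of the Switch node's routing logic (field names taken
# from the n8n expressions; surrounding payload keys illustrative).

def route_observable(event):
    """Return the output index the Switch node would choose."""
    data_type = event["body"]["object"]["dataType"]
    if data_type == "hash":
        return 0  # path 0 -> Velociraptor hash hunt
    if data_type == "filename":
        return 1  # path 1 -> Velociraptor filename hunt
    return None   # unmatched observable types fall through

# Example webhook payload, trimmed to the relevant fields
event = {
    "body": {
        "object": {
            "dataType": "hash",
            "data": "67d419cd42edc4b4754d9f3a5c191d86",
            "_parent": "p_l_vXcBUvmttZAJVYkn",
        }
    }
}

print(route_observable(event))  # -> 0
```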

Once we’ve determined what path we want to take, we will execute a hunt within Velociraptor to search for the observable across all of our endpoints.

n8n — Velociraptor Hash Hunt Node

The Velociraptor Hash Hunt Node is simply a Function Node that executes a Velociraptor API query and returns the response so that it can be processed. The Velociraptor API allows us to hook into the server, execute arbitrary queries, and perform all sorts of actions like starting client collections, hunts, etc. — pretty much anything that can be done within the UI itself.

Prerequisites:

A test query can be invoked locally on the n8n node, or from n8n itself (using the native n8n Function Node), to ensure access to the Velociraptor API is working as intended.

For example:

python3 pyvelociraptor -c api.config.yaml "SELECT * from info()"

n8n — Workflow Execution

{
  "nodes": [
    {
      "parameters": {
        "events": [
          "case_artifact_create"
        ]
      },
      "name": "TheHive Trigger",
      "type": "n8n-nodes-base.theHiveTrigger",
      "typeVersion": 1,
      "position": [
        400,
        180
      ],
      "webhookId": "b1dcd275-0940-4925-9798-df9e121edb95",
      "alwaysOutputData": true,
      "retryOnFail": false,
      "notesInFlow": false,
      "executeOnce": false,
      "continueOnFail": true
    },
    {
      "parameters": {
        "command": "=python3 /home/wlambert/.local/bin/pyvelociraptor --config /home/wlambert/api_client.yaml 'SELECT hunt(description=\"TheHive Hash Hunt::{{$json[\"body\"][\"object\"][\"_parent\"]}}::{{$json[\"body\"][\"object\"][\"data\"]}}\", expires=(now() + 60) * 1000000, artifacts=[\"Generic.Forensic.LocalHashes.Query\"],spec=dict(`Generic.Forensic.LocalHashes.Query`=dict(Hashes=\"Hash\\n{{$json[\"body\"][\"object\"][\"data\"]}}\\n\"))) AS Hunt from scope()'"
      },
      "name": "Velociraptor - Hash Hunt",
      "type": "n8n-nodes-base.executeCommand",
      "typeVersion": 1,
      "position": [
        810,
        80
      ]
    },
    {
      "parameters": {
        "dataType": "string",
        "value1": "={{$json[\"body\"][\"object\"][\"dataType\"]}}",
        "rules": {
          "rules": [
            {
              "value2": "hash"
            },
            {
              "value2": "filename",
              "output": 1
            }
          ]
        }
      },
      "name": "Route by Observable Type",
      "type": "n8n-nodes-base.switch",
      "typeVersion": 1,
      "position": [
        580,
        180
      ]
    },
    {
      "parameters": {
        "command": "=python3 /home/wlambert/.local/bin/pyvelociraptor --config /home/wlambert/api_client.yaml 'SELECT hunt(description=\"TheHive Filename Hunt::{{$json[\"body\"][\"object\"][\"_parent\"]}}::{{$json[\"body\"][\"object\"][\"data\"]}}\", expires=(now() + 60) * 1000000, artifacts=[\"Windows.Forensics.FilenameSearch\"],spec=dict(`Windows.Forensics.FilenameSearch`=dict(yaraRule=\"wide nocase:{{$json[\"body\"][\"object\"][\"data\"]}}\"))) AS Hunt from scope()'"
      },
      "name": "Velociraptor - Filename Hunt",
      "type": "n8n-nodes-base.executeCommand",
      "typeVersion": 1,
      "position": [
        810,
        240
      ]
    }
  ],
  "connections": {
    "TheHive Trigger": {
      "main": [
        [
          {
            "node": "Route by Observable Type",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Route by Observable Type": {
      "main": [
        [
          {
            "node": "Velociraptor - Hash Hunt",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "Velociraptor - Filename Hunt",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  }
}

Once a TheHive case is updated with a new case observable, an event will be sent via the HTTP trigger, and assuming the observable type matches our criteria, we’ll proceed with the rest of the workflow.
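The two Velociraptor hunt nodes above build their VQL by interpolating webhook fields into a `hunt()` call. A rough Python sketch of that string assembly (the function name and structure are illustrative; the query shape matches the hash-hunt node):

```python
def build_hash_hunt_vql(case_id, md5):
    """Assemble the VQL hunt() query used by the hash-hunt node.

    The description embeds the case ID and observable value,
    separated by '::', so downstream automation can parse them
    back out later with split().
    """
    description = f"TheHive Hash Hunt::{case_id}::{md5}"
    return (
        "SELECT hunt("
        f'description="{description}", '
        "expires=(now() + 60) * 1000000, "
        'artifacts=["Generic.Forensic.LocalHashes.Query"], '
        "spec=dict(`Generic.Forensic.LocalHashes.Query`="
        f'dict(Hashes="Hash\\n{md5}\\n"))'
        ") AS Hunt FROM scope()"
    )

vql = build_hash_hunt_vql("p_l_vXcBUvmttZAJVYkn",
                          "67d419cd42edc4b4754d9f3a5c191d86")
print(vql)
```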

Testing our Work

This is a hit from a slightly modified community YARA rule (originally by Florian Roth) utilized by Strelka.

The rule looks like this:

Modified YARA rule for obfuscated batch files

This rule, in short, looks for a batch file under 200KB, with at least 3 occurrences of %%, frequently used during variable substitution in an attempt to obfuscate the intention of malicious batch scripts. Obviously, there could be some false positives here, but the rule provides an example of how activities like this could be detected, for example, in the case of Trickbot and its obfuscated launcher.bat, discussed here:

https://blog.huntresslabs.com/tried-and-true-hacker-technique-dos-obfuscation-400b57cd7dd

I’ve transcribed the technique/Python script described in the video into a gist here so this can be easily emulated for detection purposes:

https://gist.github.com/weslambert/1c8a92bda81c8c7bf9ae054d26a561e0
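While the actual detection is written as a YARA rule, its core heuristic (a small batch file containing repeated %% sequences) can be approximated in Python for experimenting with sample scripts. This is a rough sketch of the logic, not a substitute for the rule itself:

```python
def looks_obfuscated(filename, content, max_size=200 * 1024, min_hits=3):
    """Approximate the YARA rule's heuristic: a .bat file under
    200KB containing at least 3 occurrences of '%%', a pattern
    common in variable-substitution obfuscation."""
    if not filename.lower().endswith(".bat"):
        return False
    if len(content) > max_size:
        return False
    return content.count(b"%%") >= min_hits

sample = b"set %%a%%=foo\nset %%b%%=bar\ncall %%a%%%%b%%\n"
print(looks_obfuscated("launcher.bat", sample))  # -> True
```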

In Security Onion, Zeek extracts files from the network traffic. Strelka scans these files and provides file-oriented detection and metadata analysis through the use of various scanners (including YARA), as well as hash computation (for MD5, SHA1/256, and ssdeep). This is very useful, especially if traditional NIDS (network intrusion detection system) alerts are not generated for this traffic, and endpoint telemetry is not available, or does not indicate anything is amiss; Strelka and YARA help to provide another layer of detection in this manner.
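For comparison against the hashes Strelka records, the same digests (minus ssdeep, which requires a third-party package) can be computed locally with Python's standard hashlib:

```python
import hashlib

def file_style_hashes(data: bytes) -> dict:
    """Compute the MD5/SHA1/SHA256 digests that Strelka also records."""
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha1": hashlib.sha1(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
    }

# In practice, read the extracted file's bytes; b"hello" is a stand-in.
print(file_style_hashes(b"hello"))
```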

If we drill down into the alert, we can get more details:

File hashes and other details

From here, we can see the analyzed file type, which scanners were invoked by Strelka, as well as the different hashes for the file associated with the YARA rule.

Batch tokens and keywords

We can see from the above image that there are numerous variables defined in the batch script, and they all seem to be random in nature (seemingly obfuscating the intended function of the script, as the name of the YARA rule suggests). This certainly does not appear to be normal.

We could go further, and try to inspect the file if we wish— this is made easy by the fact that Zeek’s extracted files can be found in Strelka’s processed directory on a Security Onion sensor. We could grab the file and process it using NetworkMiner or some other forensics tool(s) included with the Security Onion analyst VM to get a better understanding of what is going on under the hood. For now, we’ll continue documenting our examination of events and attempt to get more of a host-level view to see which hosts may be affected.

From here, we can say that we have seen this file traverse the network, and that there may be attacker tomfoolery at play. We’d like to determine if this file is present on any hosts in our environment, and if so, how many — so we march on.

Let’s push this event to TheHive:

TheHive case created from Security Onion

Next, we’ll add the MD5 hash as an observable to the case in TheHive:

TheHive observable creation

We’ll click “Create observable(s)”, and our automatic hash hunt and remediation should be initiated. Again, this hunt could occur over thousands of endpoints! Being able to quickly hone in on affected hosts is critical — this is one of the greatest strengths of Velociraptor.

When the n8n workflow is executed, you will notice a green icon appear for each node, indicating it has successfully completed with results:

n8n workflow

After the workflow has been executed, we can check the results of the workflow/hunt execution in Velociraptor:

Hash hunt started from TheHive in Velociraptor

In Velociraptor, a hunt has been created, and is now active. The hunt can be summarized as a collection of tasks for a targeted group of clients to perform. Clients will enroll in the hunt as they come online, and will return the results to the server.

For this hunt, we targeted all clients, and told the clients we would like to collect the Generic.Forensics.LocalHashes.Query artifact. In Velociraptor, artifacts are really just a set of VQL queries about a host’s state, or certain files or configuration found on a host. This artifact searches the local client hash database for the specified hash (or hashes). Keep in mind, you’ll want to have a regularly updated database of these hashes by scheduling the Generic.Forensic.LocalHashes.Glob artifact to run at a regular interval to pick up hashes for new files.

Alternatively, we could assign the Windows.Forensics.LocalHashes.Usn artifact to the client monitoring table, and have the hash database automatically updated as the USN journal is updated. However, depending on when you enable the USN artifact, if the USN journal no longer includes a reference to the creation or modification of a file, then the client’s local hash database may not contain the hash you are searching for, and you may not get the results you are expecting. At this point, if the file did still reside on disk, you would need to have run the Glob artifact to generate a hash for the file and insert it into the database. In short, it may be best to run the Glob artifact when you initially roll out a client, then ensure the Windows.Forensics.LocalHashes.Usn artifact is enabled thereafter.

The hunt description created from the provided workflow contains three components, separated by a delimiter of :: (two colons):

  1. The hunt name/type (Ex. TheHive Hash Hunt)
  2. The TheHive case ID (Ex. p_l_vXcBUvmttZAJVYkn )
  3. The observable value (Ex. 67d419cd42edc4b4754d9f3a5c191d86)

By creating the hunt description in this way, we can later parse these values using the split() function, and access the array index to facilitate other actions, like posting tags back to an associated TheHive case.
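Parsing those components back out in Python mirrors the VQL split() used later by the server artifact:

```python
def parse_hunt_description(description):
    """Split a 'Name::CaseID::Observable' hunt description into parts."""
    hunt_name, case_id, observable = description.split("::")
    return {"hunt": hunt_name, "case_id": case_id, "observable": observable}

desc = "TheHive Hash Hunt::p_l_vXcBUvmttZAJVYkn::67d419cd42edc4b4754d9f3a5c191d86"
parsed = parse_hunt_description(desc)
print(parsed["case_id"])  # -> p_l_vXcBUvmttZAJVYkn
```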

Viewing the hunt details, we can see that the hunt has returned a result for a single PC, office-pc!

Looking into the specific flow by clicking the hyperlinked F.C0O863JUSVPFG text, we see the hash query:

Hash query flow issued by Velociraptor

If we dig into the results (one row returned for the office-pc client), we can see that the hash resolves to a file named system-maintenance.bat:

Hit for Velociraptor hash query

From here, we could narrow in on this PC, collect more artifacts (even the file itself) and perform a more intensive, targeted investigation with Velociraptor. With Velociraptor, we can quickly scan the MFT, registry, or process memory for additional clues, and much more. The ability to augment existing artifacts and create new ones on the fly lends itself to nearly unlimited potential from a forensics and response perspective.

Velociraptor — Built-in Automation

For example, if we wanted to automatically quarantine a host (check out Matthew Green’s blog post on IPSEC and quarantine and the built-in Windows Quarantine artifact for Velociraptor) when a hash provided by the TheHive-triggered Hash Hunt was found on the machine, we could have Velociraptor do that natively, rather than building more complex workflows.

Note: Automatically quarantining a host may not always be the best course of action during an investigation, especially if knowledge of a potential threat is limited. Ideally, we would like to properly scope the incident before taking these actions, but in other cases this could be satisfactory (it can depend on the organization, type of host, etc.). We can always collect more data via additional Velociraptor artifacts (e.g. process memory, network connections, etc.), or other sources, to gather additional context before making this type of decision. 💥

Continuing, we can schedule a server artifact to run within Velociraptor to monitor execution of the flows generated by the hash hunt, then call the Windows quarantine artifact for that host.

The following is an example of a server artifact that looks for completed flows with results (indicating an observable was found on the endpoint), filters for flows created by a hunt derived from a TheHive observable, then quarantines the host and updates the TheHive case with a tag indicating the action taken:

name: Custom.Server.Automation.Quarantine
description: |
  This artifact will do the following:

  - Look for artifacts with successful completion and results with regard to `ArtifactRegex`
  - Look for the above, in addition to Hunts with a description equating/similar to `HuntRegex`
  - Quarantine relevant Windows hosts
  - Update a TheHive case with a tag noting that the client was quarantined
author: Wes Lambert, @therealwlambert
type: SERVER_EVENT
parameters:
  - name: ArtifactRegex
    default: "Hashes"
  - name: HuntRegex
    default: "TheHive"
  - name: DisableSSLVerify
    type: bool
    default: true
sources:
  - query: |
      LET TheHiveKey <= server_metadata().TheHiveKey
      LET TheHiveURL <= server_metadata().TheHiveURL
      LET ClientTag = FQDN + ' - Quarantined by Velociraptor'
      LET CombinedTags = array(a1=Tags,a2=ClientTag)
      LET GetTags =
        SELECT parse_json(data=Content).tags AS Tags FROM http_client(
          headers=dict(`Content-Type`="application/json", `Authorization`=format(format="Bearer %v", args=[TheHiveKey])),
          disable_ssl_security=DisableSSLVerify,
          method="GET",
          url=format(format="%v/api/case/%v", args=[TheHiveURL,TheHiveCaseID]))
      LET FinalTags = SELECT Tags, if(condition=Tags=NULL, then=ClientTag, else=CombinedTags) AS SendTags FROM GetTags
      LET PostTags =
        SELECT * FROM http_client(
          data=serialize(item=dict(tags=FinalTags.SendTags[0]), format="json"),
          headers=dict(`Content-Type`="application/json", `Authorization`=format(format="Bearer %v", args=[TheHiveKey])),
          disable_ssl_security=DisableSSLVerify,
          method="PATCH",
          url=format(format="%v/api/case/%v", args=[TheHiveURL,TheHiveCaseID]))
      LET FlowInfo = SELECT Flow.client_id AS ClientID, client_info(client_id=ClientId).os_info.fqdn AS FQDN, Flow.request.creator AS FlowCreator, Flow FROM watch_monitoring(artifact="System.Flow.Completion") WHERE Flow.artifacts_with_results =~ ArtifactRegex
      LET StartQuarantine =
        SELECT ClientID,
          {SELECT hunt_description from hunts(hunt_id=FlowCreator)} AS HuntDescription,
          {SELECT split(string=hunt_description, sep="::")[1] from hunts(hunt_id=FlowCreator)} AS TheHiveCaseID,
          {SELECT collect_client(client_id=ClientID, artifacts=["Windows.Remediation.Quarantine"], spec=dict(`Windows.Remediation.Quarantine`=dict())) FROM scope()} AS Quarantine,
          FQDN
        FROM FlowInfo WHERE HuntDescription =~ HuntRegex
      SELECT * FROM foreach(
        row=StartQuarantine,
        query={ SELECT ClientID, Quarantine, {SELECT * FROM PostTags} AS TheHiveUpdate FROM scope() })

🚫 👮 TEST AT YOUR OWN RISK — Please ensure the artifact works as intended before implementing it in a production environment. Also consider adjusting the artifact to work using labels, so as to avoid quarantining hosts that provide critical production services, etc.
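To reason about the artifact's tag-handling step (the CombinedTags/FinalTags logic, which copes with a case that has no existing tags), the same behavior can be mirrored in plain Python:

```python
def merge_tags(existing_tags, fqdn):
    """Mirror the artifact's tag logic: append a quarantine note,
    handling a case whose tags field is empty or missing."""
    client_tag = f"{fqdn} - Quarantined by Velociraptor"
    if not existing_tags:
        return [client_tag]
    return list(existing_tags) + [client_tag]

print(merge_tags(["malware"], "office-pc"))
# -> ['malware', 'office-pc - Quarantined by Velociraptor']
print(merge_tags(None, "office-pc"))
# -> ['office-pc - Quarantined by Velociraptor']
```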

After creating our custom server artifact, we can assign it to the server monitoring table:

Automated quarantine artifact

Once assigned, whenever results are derived from Velociraptor hunts from TheHive, relevant hosts should be quarantined, and then the associated TheHive case updated.

For example, if we add the aforementioned hash (67d419cd42edc4b4754d9f3a5c191d86) to the observables for our case, a hunt and the resultant flows will be instantiated:

Quarantine started by query result and server artifact

..and the TheHive case will be updated with an appropriate tag:

TheHive case tag(s)

Additional remediation/response automation opportunities might include uninstalling persistence mechanisms, like scheduled tasks. This could be achieved through the use of the Windows.Remediation.ScheduledTasks artifact, or custom artifacts utilizing PowerShell commands or scripts.

As warned previously, ensure that this type of action is thoroughly tested, and only used in conjunction with high-fidelity information. 💥

Velociraptor — Additional Context and Correlation

See the following for an example of how to configure this for native Elasticsearch or Logstash:

Or, if you’d like to simply write the flows to disk and have something like Filebeat (which can send to Elasticsearch/Logstash, but also Redis or Kafka) or another log shipper pick up the flow results, you can modify the Elastic.Flows.Upload artifact to do so by removing the elastic_upload query portion and replacing it with a simple SELECT * from documents:

Query to upload flow results to Elasticsearch/Logstash

to

Query to only write flow results to disk

If writing locally, you’ll want to make sure you are rotating/pruning the resultant files to avoid filling up the disk.
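One simple pruning approach (the directory path and retention period here are illustrative assumptions) is a scheduled script that deletes result files older than some number of days:

```python
import os
import time

def prune_old_files(directory, max_age_days=7):
    """Delete files older than max_age_days from directory;
    return the paths that were removed. The retention period
    is an example value, not a recommendation."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(path)
    return removed
```

A job like this could be run from cron (or an n8n Cron node) against the directory the modified artifact writes to.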

Once Elastic forwarding is enabled, the overall flow would look like the following:

Data flow summary

After utilizing a custom Logstash input and Elastic Ingest parsing within Security Onion, we can see results like the following inside of the Hunt interface:

Hunt record for Velociraptor hunt result

From here, we could correlate Velociraptor hunt results with other data, either by searching on the hash.md5 value, the client.ip value, or perhaps other fields not included in the image above.

We could also set up quick actions for Hunt to allow us to quickly pivot back over to Velociraptor clients and continue our investigation.

We can do this by copying

/opt/so/saltstack/default/salt/soc/files/soc/hunt.actions.json

to

/opt/so/saltstack/local/salt/soc/files/soc/

and modifying it to include an action for Velociraptor:

{
  "name": "Velociraptor",
  "description": "VR Client Lookup",
  "icon": "fa-external-link-alt",
  "target": "_blank",
  "links": [
    "https://192.168.6.175:8889/app/index.html?#/collected/{value}/{:flow.id}"
  ]
}

After that, we just need to restart SOC with so-soc-restart so it will pick up the changes. Now, once we have a result inside of Hunt that contains a Velociraptor client ID, we can click the value, then choose the Velociraptor action:

Pivot from Velociraptor client ID

…another browser tab is generated and we can now see the results back in Velociraptor!

These pivots allow us to easily go back and forth between our different datasets and look at endpoints in more detail. We could also perform the same configuration change for alerts.actions.json to allow for the same actions in the Alerts interface in Security Onion.

Alternate approaches

  • Generate hunts via webhook/n8n as described above, but do so from alert artifacts vs case observables (we don’t use TheHive alerts in Security Onion, but this could be from a standalone TheHive instance, or if you would still like to generate alerts using Elastalert, or another feeder). If an alert is generated, and the artifact type matches the criteria set by n8n, then schedule a hunt in Velociraptor.

If you have many alerts, this approach could prove to be more troublesome than beneficial if not well-tuned or filtered. 💥

  • Use Logstash’s HTTP output from Security Onion to send specific cloned alerts/events to the HTTP trigger in n8n, then filter them for items of interest and create a case in TheHive based on certain criteria, having Velociraptor automatically hunt for the extracted indicators.

If you have many alerts, this approach could prove to be more troublesome than beneficial if not well-tuned or filtered. 💥

  • Automatically trigger Cortex analyzers via n8n, retrieve the results, then initiate a Velociraptor hunt, or other actions, based on those results.

Again, guardrails should be in place. 😅

Conclusion

In summary, we have alert, network, host, and file data from Security Onion, and can quickly and easily pivot from that data over to TheHive, immediately begin hunting for malicious indicators across our entire fleet of thousands of endpoints automatically with n8n and Velociraptor, and scope and contain affected endpoints (while performing additional forensic examination). The results (along with the results from other flows) can then be ingested back into Security Onion, enriching the existing data and providing even greater overall context. By leveraging these free and open tools, we can take our enterprise security monitoring and response capability to the next level!

If you have any comments or questions about this article, please feel free to respond here, or tweet them to @therealwlambert, and throw a shout-out to @securityonion, @theHive_Project, @velocidex, @n8n_io!
