Custom Test Cases
Test Case Framework
Test cases must follow the framework in order to be properly discovered, executed, and have their results collected and returned to the user.
Example Custom Audit
We will start with a complete example custom test case, then explain each piece. This example performs a DNS query for the domain name specified in the lookup_name parameter against the DNS server specified by the dns_server parameter. The test passes if an A record is resolved, and fails if an A record cannot be resolved.
from framework import *

@register_audit("dns_query_audit")
class DNSQueryAudit(AuditRunner):
    parameters = [
        Parameter("dns_server", "DNS Server IP", "8.8.8.8", "The IP address of the DNS server to query"),
        Parameter("lookup_name", "Lookup Name", "google.com", "The domain name to resolve")
    ]

    friendly_name = "DNS Resolution Audit"
    description = "Performs a DNS query against a specific server to verify network reachability and resolution."
    fail_condition = "The test will fail if the DNS server is unreachable or the name cannot be resolved."
    relevant_mitigations = []
    bw_audit_id = "DNS_QUERY_001"
    estimated_time_in_minutes = "1"

    def dns_a_record(self, dns_server, lookup_name):
        import socket
        import struct

        # Build a DNS query: header (ID, flags with recursion desired, one
        # question), the QNAME as length-prefixed labels, then QTYPE=A (1)
        # and QCLASS=IN (1).
        header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        qname = b"".join(struct.pack("B", len(p)) + p.encode() for p in lookup_name.split('.')) + b"\x00"
        packet = header + qname + struct.pack("!HH", 1, 1)

        try:
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.settimeout(5.0)
                s.sendto(packet, (dns_server, 53))
                data, _ = s.recvfrom(1024)
        except OSError:
            # Timeout or network error -- treat as a failed resolution.
            return None

        if not data or len(data) < 12:
            return None
        res_header = struct.unpack("!HHHHHH", data[:12])
        res_flags = res_header[1]
        res_ancount = res_header[3]
        if (res_flags & 0xF) != 0:  # non-zero RCODE signals a resolution error
            return None
        if res_ancount == 0:
            return None

        # First answer record: a 2-byte compression pointer, then TYPE (2),
        # CLASS (2), TTL (4) and RDLENGTH (2) precede the RDATA, which holds
        # the IPv4 address.
        offset = 12 + len(qname) + 4
        rdlength = struct.unpack("!H", data[offset+10 : offset+12])[0]
        return ".".join(map(str, data[offset+12 : offset+12+rdlength]))

    def run_audit(self):
        dns_server = self.params["dns_server"]
        lookup_name = self.params["lookup_name"]
        self.log.debug(f"Starting DNS Query for {lookup_name} via {dns_server}")
        response_ip = self.dns_a_record(dns_server, lookup_name)
        self.log.debug(f"DNS Response: {response_ip}")
        failed = response_ip is None
        return {
            "dns_server": dns_server,
            "lookup_name": lookup_name,
            "response": response_ip,
            "pass_rules": {"vulnerable": failed},
        }

    def format_results(self, results):
        title = "DNS Query"
        segments = [
            Report.create_text_segment(f"Target Server: {results['dns_server']}"),
            Report.create_text_segment(f"Query Name: {results['lookup_name']}"),
            Report.create_text_segment(f"Response: {results['response']}")
        ]
        return [{"title": title, "segments": segments}]

Import the Framework
from framework import *

Importing the framework provides the types necessary to create an audit class. This includes the registration decorator, the AuditRunner and Parameter base classes, and reporting helpers.
Register an Audit
@register_audit("dns_query_audit")
class DNSQueryAudit(AuditRunner):

Whenever a new test case is created, it must be registered with the class decorator @register_audit("new_audit_name").
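The framework's internals are not shown in this guide, but a class-registration decorator of this kind is typically a thin wrapper around a lookup table. The sketch below is a hypothetical illustration, not the actual framework code; the AUDIT_REGISTRY name and structure are assumptions.

```python
# Hypothetical sketch of a registration decorator; the registry name
# (AUDIT_REGISTRY) and the stored audit_id attribute are assumptions,
# not the framework's actual implementation.
AUDIT_REGISTRY = {}

def register_audit(audit_id):
    def decorator(cls):
        cls.audit_id = audit_id          # remember the ID on the class itself
        AUDIT_REGISTRY[audit_id] = cls   # map the ID to the class for discovery
        return cls                       # return the class unchanged
    return decorator

@register_audit("dns_query_audit")
class DNSQueryAudit:
    pass
```

A registry like this lets the framework find every audit class by name without the audit author writing any discovery code.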
Define Parameters
parameters = [
    Parameter("dns_server", "DNS Server IP", "8.8.8.8", "The IP address of the DNS server to query"),
    Parameter("lookup_name", "Lookup Name", "google.com", "The domain name to resolve")
]

Parameters are how dynamic input is provided to an audit. You must provide a parameter ID, name, default value and description for each. At run time, the supplied values are available through the self.params dictionary, keyed by parameter ID.
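For illustration only, a Parameter can be modeled as a simple record, with the framework seeding self.params from the defaults when the user supplies no overrides. This is a hypothetical sketch of that behavior, not the framework's actual code.

```python
from collections import namedtuple

# Hypothetical stand-in for the framework's Parameter type.
Parameter = namedtuple("Parameter", ["id", "name", "default", "description"])

parameters = [
    Parameter("dns_server", "DNS Server IP", "8.8.8.8", "The IP address of the DNS server to query"),
    Parameter("lookup_name", "Lookup Name", "google.com", "The domain name to resolve"),
]

# One plausible way the framework could build self.params when no
# user-supplied values are present: parameter ID -> default value.
params = {p.id: p.default for p in parameters}
```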
Fill Necessary Properties
friendly_name = "DNS Resolution Audit"
description = "Performs a DNS query against a specific server to verify network reachability and resolution."
fail_condition = "The test will fail if the DNS server is unreachable or the name cannot be resolved."
relevant_mitigations = []
bw_audit_id = "DNS_QUERY_001"
estimated_time_in_minutes = "1"

Many properties, such as friendly_name and description, are intuitive to fill in.
relevant_mitigations is a property provided to support compliance tracking. If your test case evaluates an issue with an associated mitigation ID, you may provide that in this field.
bw_audit_id is an external ID field you may use to link audits to other systems.
estimated_time_in_minutes is a property provided to support test planning. The value may be displayed to the user when designing a test set.
Design Test Logic
def run_audit(self):
    dns_server = self.params["dns_server"]
    lookup_name = self.params["lookup_name"]
    self.log.debug(f"Starting DNS Query for {lookup_name} via {dns_server}")
    response_ip = self.dns_a_record(dns_server, lookup_name)
    self.log.debug(f"DNS Response: {response_ip}")
    failed = response_ip is None
    return {
        "dns_server": dns_server,
        "lookup_name": lookup_name,
        "response": response_ip,
        "pass_rules": {"vulnerable": failed},
    }

The body of the run_audit method is where your test logic belongs. This method must return a dictionary containing at least a pass_rules dictionary with a boolean vulnerable property. You may include other context as well; the full dictionary is passed on to format_results.
Output printed with self.log (methods debug, warn, error) will be available for download via VSEC after test execution has completed.
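The pass_rules contract can be exercised in isolation. The helper below is a hypothetical stand-in for how a harness might interpret the returned dictionary (vulnerable set to True means the audit failed); it is not part of the framework.

```python
# Hypothetical interpretation of the run_audit return contract:
# the audit fails exactly when pass_rules["vulnerable"] is True.
def audit_passed(results):
    return not results["pass_rules"]["vulnerable"]

# Example result dictionaries in the shape run_audit returns.
resolved = {"response": "142.250.80.46", "pass_rules": {"vulnerable": False}}
unresolved = {"response": None, "pass_rules": {"vulnerable": True}}
```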
Format Test Results
def format_results(self, results):
    title = "DNS Query"
    segments = [
        Report.create_text_segment(f"Target Server: {results['dns_server']}"),
        Report.create_text_segment(f"Query Name: {results['lookup_name']}"),
        Report.create_text_segment(f"Response: {results['response']}")
    ]
    return [{"title": title, "segments": segments}]

Within the body of the format_results method you can access the results returned by run_audit. Here you must return a list of report sections, each a dictionary with title and segments properties, which will be used for PDF reporting. Ideally the title indicates the test result, and the segments explain that result or present any available evidence.
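As an example of a result-aware title, the variant below derives it from the pass_rules entry. The Report stub included here is only so the sketch runs on its own; its segment format is an assumption, and the real framework provides Report.create_text_segment.

```python
# Minimal stand-in for the framework's Report helper, included only so this
# sketch is self-contained; the real segment format is not documented here.
class Report:
    @staticmethod
    def create_text_segment(text):
        return {"type": "text", "text": text}

def format_results(results):
    # Derive a result-aware title from the pass_rules entry.
    outcome = "FAIL" if results["pass_rules"]["vulnerable"] else "PASS"
    title = f"DNS Query - {outcome}"
    segments = [
        Report.create_text_segment(f"Target Server: {results['dns_server']}"),
        Report.create_text_segment(f"Query Name: {results['lookup_name']}"),
        Report.create_text_segment(f"Response: {results['response']}"),
    ]
    return [{"title": title, "segments": segments}]

report = format_results({
    "dns_server": "8.8.8.8",
    "lookup_name": "google.com",
    "response": "142.250.80.46",
    "pass_rules": {"vulnerable": False},
})
```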
Running a Custom Test Case
Upload a custom test case to VSEC Test by clicking the Create Test Case button under the Custom Test Cases tab, making sure to fill out the relevant fields.
You can then create a Test Spec that includes your new custom test case, set its parameters, and finally run the test by creating a Test Run.