Managing DAST Findings in the OpenClaw Full‑Stack Template: Automated Workflows, Triage, and Remediation
You can fully automate the ingestion, prioritization, and remediation of DAST findings in the OpenClaw Full‑Stack Template by integrating scans into your CI/CD pipeline, using risk‑scoring rules, and syncing tickets with your issue tracker.
1. Introduction
Dynamic Application Security Testing (DAST) is essential for modern, cloud‑native applications because it discovers runtime vulnerabilities that static analysis often misses. When DAST is baked into the delivery pipeline, security becomes a shared responsibility rather than an after‑the‑fact checkpoint.
The OpenClaw Full‑Stack Template provides a ready‑made CI/CD scaffold, pre‑configured Docker images, and a set of reusable workflow components that let you spin up a secure development environment in minutes. By extending this template with automated DAST handling, teams can achieve continuous security without slowing down releases.
In this guide we’ll walk through:
- Running DAST scans in CI/CD.
- Collecting and normalizing results (JSON, SARIF, etc.).
- Scoring and triaging findings automatically.
- Remediation – both scripted and manual.
- Issue‑tracker integration (Jira, GitHub, GitLab).
- Practical scripts and pipeline snippets.
2. Ingesting DAST Findings
2.1 Running DAST scans in CI/CD
Most CI platforms support container‑based execution, making it trivial to run a DAST tool such as OWASP ZAP or Nikto as a job step. Below is a GitHub Actions example that launches ZAP in daemon mode, crawls the target, and exports a SARIF report.
name: DAST Scan
on:
  push:
    branches: [ main ]

jobs:
  zap-scan:
    runs-on: ubuntu-latest
    services:
      web:
        image: myapp:latest
        ports: [ "8080:8080" ]
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Start ZAP daemon
        run: |
          # host-gateway lets the ZAP container reach the app via host.docker.internal on Linux runners
          docker run -d --name zap \
            -p 8090:8090 \
            -e ZAP_PORT=8090 \
            --add-host=host.docker.internal:host-gateway \
            owasp/zap2docker-stable zap.sh -daemon -port 8090 -host 0.0.0.0

      - name: Wait for app
        run: |
          # -f makes curl fail on HTTP errors, so we keep polling until /healthz responds
          until curl -sf http://localhost:8080/healthz; do sleep 5; done

      - name: Run active scan
        run: |
          docker exec zap zap-cli -p 8090 -host 0.0.0.0 \
            quick-scan -r http://host.docker.internal:8080 \
            -f sarif -o zap-report.sarif
          # copy the report out of the ZAP container (the image WORKDIR is /zap)
          docker cp zap:/zap/zap-report.sarif zap-report.sarif

      - name: Upload SARIF
        uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: zap-report.sarif
2.2 Collecting results (JSON, SARIF, etc.)
OpenClaw’s Workflow automation studio can parse SARIF, JSON, or XML payloads and push them into a unified security_findings table in the built‑in PostgreSQL instance. The following Bash snippet demonstrates how to extract key fields from a SARIF file and insert them via psql:
#!/usr/bin/env bash
set -euo pipefail

SARIF=$1
DB_URL="postgresql://ubos:password@localhost:5432/ubosdb"

# Extract the key fields as CSV. Note that messages containing commas or
# embedded quotes need real CSV parsing; the Python parser in Section 6.1
# handles that case more robustly.
jq -r '.runs[].results[] | [
  .ruleId,
  .level,
  .message.text,
  .locations[0].physicalLocation.artifactLocation.uri,
  .locations[0].physicalLocation.region.startLine
] | @csv' "$SARIF" | while IFS=, read -r rule level msg uri line; do
  psql "$DB_URL" -c \
    "INSERT INTO security_findings (rule_id, severity, description, file_path, line_number, status)
     VALUES ('$rule', '$level', '$msg', '$uri', $line, 'new');"
done
The same approach works for plain JSON output, for example from the OpenAI ChatGPT integration if you prefer a language-model-driven scanner.
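If your scanner emits a flat JSON array rather than SARIF, the ingestion logic is nearly identical. Here is a minimal Python sketch; the payload shape and the field names (rule, severity, message, path, line) are assumptions to adapt to whatever your tool actually produces:

import json
import psycopg2

def ingest_json(path):
    """Load a hypothetical flat-JSON findings file into security_findings."""
    with open(path) as fh:
        findings = json.load(fh)  # assumed shape: a list of finding objects

    conn = psycopg2.connect("postgresql://ubos:password@localhost:5432/ubosdb")
    cur = conn.cursor()
    for f in findings:
        cur.execute(
            """INSERT INTO security_findings
               (rule_id, severity, description, file_path, line_number, status)
               VALUES (%s, %s, %s, %s, %s, 'new')""",
            # the keys below are assumptions; map them to your scanner's schema
            (f.get("rule"), f.get("severity"), f.get("message"),
             f.get("path"), f.get("line")),
        )
    conn.commit()
    conn.close()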
3. Prioritizing Findings
3.1 Risk scoring methodology
A robust scoring model balances CVSS base score, exploitability, and business impact. The following table shows a simple weighted formula that fits well into the OpenClaw security_findings schema:
| Factor | Weight | Source |
|---|---|---|
| CVSS Base | 0.5 | SARIF rule metadata |
| Exploit Availability | 0.2 | NIST NVD feed (via API) |
| Asset Criticality | 0.2 | Internal CMDB tag |
| Historical Remediation Time | 0.1 | Ticketing system SLA |
The final risk_score (0‑100) is stored alongside each finding, enabling downstream automation.
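As a worked example, here is a minimal Python sketch of that weighted formula. The normalization choices (CVSS on its native 0-10 scale, the remaining factors as 0-1 signals) are assumptions you should calibrate against your own data:

def risk_score(cvss_base, exploit_available, asset_criticality, slow_history):
    """Weighted 0-100 risk score matching the table above.

    cvss_base: 0-10 CVSS base score (SARIF rule metadata)
    exploit_available: 1.0 if a public exploit exists (NVD feed), else 0.0
    asset_criticality: 0.0-1.0 criticality tag from the CMDB
    slow_history: 0.0-1.0, higher if similar findings historically took long to fix
    """
    return 100 * (
        0.5 * (cvss_base / 10)
        + 0.2 * exploit_available
        + 0.2 * asset_criticality
        + 0.1 * slow_history
    )

# Example: CVSS 7.5, public exploit, medium-criticality asset
print(risk_score(7.5, 1.0, 0.5, 0.2))  # ~69.5, i.e. "high" under the triage rules below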
3.2 Automated triage rules
OpenClaw’s AI marketing agents framework can be repurposed for security triage. A simple Python rule engine evaluates the score and assigns a priority tag:
import psycopg2

def triage_finding(finding):
    """Map a 0-100 risk score to a priority tag."""
    score = finding['risk_score']
    if score >= 80:
        return 'critical'
    elif score >= 60:
        return 'high'
    elif score >= 40:
        return 'medium'
    else:
        return 'low'

conn = psycopg2.connect("dbname=ubosdb user=ubos password=secret")
cur = conn.cursor()
cur.execute("SELECT id, risk_score FROM security_findings WHERE status='new'")
for fid, rs in cur.fetchall():
    priority = triage_finding({'risk_score': rs})
    cur.execute("UPDATE security_findings SET priority=%s WHERE id=%s", (priority, fid))
conn.commit()
conn.close()
Running this script as a nightly job ensures every new finding receives an appropriate priority before developers see it.
4. Acting on Findings
4.1 Automated remediation scripts
For low‑complexity issues (e.g., missing security headers, default credentials), you can generate a pull request automatically. The following Bash‑Python hybrid modifies the source files and pushes a branch; the same flow can be triggered from the Web app editor on UBOS.
#!/usr/bin/env bash
set -euo pipefail

FINDING_ID=$1
REPO_URL="git@github.com:myorg/myapp.git"
BRANCH="remediate-$FINDING_ID"

# Clone the repo and create a remediation branch
git clone "$REPO_URL" repo
cd repo
git checkout -b "$BRANCH"

# Simple remediation: inject an X-Content-Type-Options meta tag into <head>
python3 - <<'PY'
import pathlib

file = pathlib.Path('src/main/resources/static/index.html')
content = file.read_text()
if 'X-Content-Type-Options' not in content:
    content = content.replace(
        '<head>',
        '<head>\n  <meta http-equiv="X-Content-Type-Options" content="nosniff">'
    )
    file.write_text(content)
PY

git add .
git commit -m "Automated remediation for finding $FINDING_ID"
git push origin "$BRANCH"

# Open PR via GitHub API (requires GH_TOKEN env)
curl -X POST -H "Authorization: token $GH_TOKEN" \
  -d "{\"title\":\"Remediation for $FINDING_ID\",\"head\":\"$BRANCH\",\"base\":\"main\"}" \
  https://api.github.com/repos/myorg/myapp/pulls
The script can be triggered from the Workflow automation studio whenever a finding’s priority is low or medium and the remediation pattern matches a known template.
4.2 Manual remediation workflow
High‑severity findings usually require human review. A recommended manual flow:
- Security analyst reviews the SARIF details and adds contextual notes.
- Analyst assigns the ticket to the owning development team.
- Developer reproduces the issue locally using the UBOS templates for quick start to spin up a matching environment.
- Fix is committed, CI runs a new DAST scan, and the ticket is automatically closed if the finding disappears (a minimal auto‑close sketch follows this list).
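A minimal sketch of that auto‑close step, assuming the ingestion job stamps each row with a scan_id column and bumps it whenever a finding is re‑observed, so anything left behind by the latest scan can be considered fixed:

import psycopg2

def auto_close(latest_scan_id):
    """Resolve findings that no longer appear in the latest scan.

    Assumes a scan_id column (a schema assumption) that ingestion updates
    for every finding still present in the most recent scan.
    """
    conn = psycopg2.connect("postgresql://ubos:password@localhost:5432/ubosdb")
    cur = conn.cursor()
    cur.execute(
        """UPDATE security_findings
           SET status = 'resolved'
           WHERE status = 'open' AND scan_id < %s""",
        (latest_scan_id,),
    )
    conn.commit()
    conn.close()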
OpenClaw’s built‑in partner program offers consulting services to help you fine‑tune this loop for enterprise scale.
5. Integration with Issue Trackers
5.1 Creating tickets (Jira, GitHub Issues, GitLab)
The following Python snippet demonstrates a generic ticket‑creation function that works with Jira, GitHub, or GitLab based on a simple configuration object.
import os
import requests

# Read tokens from the environment rather than hard-coding them.
CONFIG = {
    "type": "github",  # or "jira", "gitlab"
    "github": {
        "repo": "myorg/myapp",
        "token": os.environ.get("GH_TOKEN", ""),
    },
    "jira": {
        "url": "https://jira.mycompany.com",
        "user": "svc_user",
        "api_token": os.environ.get("JIRA_TOKEN", ""),
    },
    "gitlab": {
        "project_id": 123,
        "token": os.environ.get("GL_TOKEN", ""),
    },
}

def create_ticket(finding):
    title = f"[{finding['priority'].upper()}] {finding['rule_id']} in {finding['file_path']}"
    body = f"""**Severity:** {finding['severity']}
**Location:** {finding['file_path']}:{finding['line_number']}
**Description:** {finding['description']}

*Generated automatically from the OpenClaw DAST pipeline*"""

    if CONFIG['type'] == 'github':
        url = f"https://api.github.com/repos/{CONFIG['github']['repo']}/issues"
        headers = {"Authorization": f"token {CONFIG['github']['token']}"}
        payload = {"title": title, "body": body, "labels": [finding['priority']]}
        r = requests.post(url, headers=headers, json=payload)
    elif CONFIG['type'] == 'jira':
        url = f"{CONFIG['jira']['url']}/rest/api/2/issue"
        auth = (CONFIG['jira']['user'], CONFIG['jira']['api_token'])
        payload = {
            "fields": {
                "project": {"key": "SEC"},
                "summary": title,
                "description": body,
                "issuetype": {"name": "Bug"},
                "labels": [finding['priority']],
            }
        }
        r = requests.post(url, auth=auth, json=payload)
    elif CONFIG['type'] == 'gitlab':
        url = f"https://gitlab.com/api/v4/projects/{CONFIG['gitlab']['project_id']}/issues"
        headers = {"PRIVATE-TOKEN": CONFIG['gitlab']['token']}
        payload = {"title": title, "description": body, "labels": finding['priority']}
        r = requests.post(url, headers=headers, data=payload)
    else:
        raise ValueError("Unsupported tracker")

    r.raise_for_status()
    return r.json()
Hook this function into the post‑scan step of your CI pipeline. Each new finding spawns a ticket, and the ticket ID is stored back in the security_findings table for later correlation.
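A minimal sketch of that correlation step, assuming the external_ticket_id column that the sync script in Section 5.2 reads; the response-field fallbacks reflect what GitHub ("number"), GitLab ("iid"), and Jira ("key") return:

def file_ticket(cur, finding):
    """Create a tracker ticket and record its ID on the finding row."""
    ticket = create_ticket(finding)
    # pick whichever identifier the configured tracker provides
    ticket_id = ticket.get("number") or ticket.get("iid") or ticket.get("key")
    cur.execute(
        "UPDATE security_findings SET external_ticket_id=%s WHERE id=%s",
        (str(ticket_id), finding["id"]),
    )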
5.2 Syncing status and comments
Bidirectional sync ensures that when a developer marks a ticket as Done, the corresponding finding status flips to resolved. The following Bash script polls the issue tracker every 15 minutes and updates the DB accordingly.
#!/usr/bin/env bash
set -euo pipefail

# Example for GitHub Issues
REPO="myorg/myapp"
TOKEN=$GH_TOKEN
DB_URL="postgresql://ubos:password@localhost:5432/ubosdb"

while true; do
  psql "$DB_URL" -t -A -c \
    "SELECT id, external_ticket_id FROM security_findings WHERE status='open'" \
    | while IFS='|' read -r fid ticket; do
      state=$(curl -s -H "Authorization: token $TOKEN" \
        "https://api.github.com/repos/$REPO/issues/$ticket" | jq -r .state)
      if [[ "$state" == "closed" ]]; then
        psql "$DB_URL" -c "UPDATE security_findings SET status='resolved' WHERE id=$fid"
      fi
    done
  sleep 900  # poll every 15 minutes
done
Similar snippets work for Jira (using /rest/api/2/issue/{key}) and GitLab (/issues/{iid}). Running them on the Enterprise AI platform by UBOS helps keep the sync loop highly available.
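For Jira, the equivalent status check reads the issue's status field. A minimal Python sketch, reusing the credentials from Section 5.1 and assuming a ticket counts as closed once it reaches Jira's "done" status category:

import requests

def jira_is_done(base_url, user, api_token, issue_key):
    """Return True if the Jira issue has reached the Done status category."""
    r = requests.get(
        f"{base_url}/rest/api/2/issue/{issue_key}",
        params={"fields": "status"},
        auth=(user, api_token),
    )
    r.raise_for_status()
    status = r.json()["fields"]["status"]
    # statusCategory "done" covers Done, Closed, Resolved, and similar states
    return status["statusCategory"]["key"] == "done"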
6. Practical Scripts
6.1 Bash/Python parser for DAST output
Combine the earlier jq extraction with a Python post‑processor that enriches each finding with CVSS data from the NIST NVD API.
import csv
import io
import subprocess

import psycopg2
import requests

def fetch_cvss(cve):
    """Look up the CVSS v3 base score for a CVE via the NVD API."""
    # NVD retired the v1.0 endpoint; this uses the current v2.0 API.
    url = f"https://services.nvd.nist.gov/rest/json/cves/2.0?cveId={cve}"
    r = requests.get(url, timeout=30)
    try:
        metrics = r.json()["vulnerabilities"][0]["cve"]["metrics"]
        return float(metrics["cvssMetricV31"][0]["cvssData"]["baseScore"])
    except (KeyError, IndexError):
        return 0.0

def ingest_sarif(sarif_path):
    # Extract with jq (as shown earlier) and pipe to Python
    raw = subprocess.check_output([
        "jq", "-r",
        ".runs[].results[] | [.ruleId, .level, .message.text, .locations[0].physicalLocation.artifactLocation.uri, .locations[0].physicalLocation.region.startLine] | @csv",
        sarif_path,
    ]).decode()

    conn = psycopg2.connect("postgresql://ubos:password@localhost:5432/ubosdb")
    cur = conn.cursor()
    # csv.reader handles commas and quotes inside the message text correctly
    for rule, level, msg, uri, lineno in csv.reader(io.StringIO(raw)):
        cvss = fetch_cvss(rule) if rule.startswith("CVE-") else 0.0
        risk = cvss * 10  # simple scaling of the 0-10 base score to 0-100
        cur.execute(
            """INSERT INTO security_findings
               (rule_id, severity, description, file_path, line_number, cvss_score, risk_score, status)
               VALUES (%s,%s,%s,%s,%s,%s,%s,'new')""",
            (rule, level, msg, uri, lineno, cvss, risk),
        )
    conn.commit()
    conn.close()
6.2 Ticket creation via API (re‑used from Section 5)
The create_ticket function above can be called directly after each INSERT in the script, ensuring immediate ticket generation.
6.3 CI/CD pipeline snippet (GitLab CI)
For teams on GitLab, the following .gitlab-ci.yml fragment runs the scan, parses results, and triggers the ticket workflow.
stages:
  - test
  - security
  - notify

dast_scan:
  stage: security
  image: owasp/zap2docker-stable
  services:
    - name: myapp:latest
      alias: web
  script:
    - zap.sh -daemon -port 8090 -host 0.0.0.0 &
    - sleep 10
    - zap-cli -p 8090 -host 0.0.0.0 quick-scan -r http://web:8080 -f sarif -o zap.sarif
    - python3 scripts/ingest_sarif.py zap.sarif
  artifacts:
    paths:
      - zap.sarif
  only:
    - main

notify_security:
  stage: notify
  image: python:3.11
  script:
    - pip install -r requirements.txt
    - python3 scripts/sync_tickets.py
  needs:
    - job: dast_scan
      artifacts: true
  only:
    - main
This pipeline demonstrates a fully automated loop: scan → ingest → triage → ticket → sync.
7. Best Practices & Tips
- Run scans on a fresh build. Deploy the exact artifact that will be released to production; this eliminates false positives caused by local dev configurations.
- Store scan artifacts. Keep SARIF/JSON files for audit trails and compliance (e.g., ISO 27001).
- Leverage the AI Email Marketing template to notify stakeholders when a critical finding appears.
- Continuous improvement loop. After each remediation, feed the fix back into the UBOS portfolio examples to enrich your knowledge base.
- Monitor trends. Subscribe to NVD RSS feeds and automatically bump the exploitability weight when a new exploit is published.
- Separate environments. Use distinct DB schemas for "dev", "staging", and "prod" findings to avoid cross‑contamination.
By treating security as code, you can apply the same version‑control discipline to your DAST configuration. The UBOS partner program offers pre‑built modules that integrate directly with popular CI tools.
8. Conclusion
Automating DAST within the OpenClaw Full‑Stack Template transforms security from a bottleneck into a continuous feedback channel. By ingesting SARIF/JSON output, applying a transparent risk‑scoring model, and syncing tickets through the Workflow automation studio, teams achieve:
- Faster detection of critical vulnerabilities.
- Reduced manual effort via scripted remediation.
- Clear audit trails for compliance.
- Scalable collaboration between security, DevOps, and developers.
Start by cloning the OpenClaw template, enable the DAST job, and let the built‑in automation handle the rest. For deeper customization—such as AI‑driven triage or multi‑cloud deployment—explore the Enterprise AI platform by UBOS or reach out via the About UBOS page.
Ready to make security a first‑class citizen in your CI/CD pipeline? Dive into the OpenClaw template today and experience a frictionless, automated DAST workflow.