Running Scans Help

Practical runbook for starting scans, selecting depth, and acting on first-pass results.

Who This Topic Is For

Users who need procedural support while running scans and interpreting first outputs.

Before You Start

Use this checklist to make sure the workflow guidance applies cleanly to your current task.

  • Target URL is available and authorized for testing.
  • You know whether this is fast validation or release-level review.
  • You know whether this should be a one-off check or recurring workflow.
  • You have the right account context if saved workflow continuity is required.

Step-By-Step Guidance

Follow these steps in order for a reliable and repeatable outcome.

  1. Choose the right entry point first.

    Use free tools for focused diagnostics, then move to dashboard workflows when you need saved history, stronger continuity, and repeatable reporting.

  2. Define scan objective before selecting depth.

    Decide if this run is baseline validation, pre-release confidence, or post-remediation verification. Depth should match objective and acceptable risk.

  3. Set depth intentionally.

    Quick is best for speed, Standard is usually the default, and deeper modes are for higher confidence and wider verification before critical launches.

  4. Triage findings in severity-first order.

    Start with critical and high-impact findings, assign ownership, and define target closure windows before broad cleanup work.

  5. Rerun as a required closure step.

    After changes are deployed, rerun the relevant checks to confirm risk reduction. Treat unresolved rerun findings as open work, not completed work.
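The five steps above can be sketched as a minimal workflow driver. Everything here is illustrative: `run_scan`, the depth names, and the finding fields are hypothetical placeholders for whatever interface your scanner actually exposes, not a real API.

```python
# Illustrative workflow skeleton. run_scan is a hypothetical stand-in
# for the scanner entry point chosen in step 1.

# Step 2-3: depth follows the stated objective, not habit.
DEPTH_BY_OBJECTIVE = {
    "baseline": "quick",
    "pre-release": "standard",
    "post-remediation": "deep",
}

def run_workflow(target, objective, run_scan):
    depth = DEPTH_BY_OBJECTIVE.get(objective, "standard")
    findings = run_scan(target, depth)
    # Step 4: severity-first ordering so critical work surfaces first.
    order = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    triaged = sorted(findings, key=lambda f: order.get(f["severity"], 99))
    return depth, triaged

# Usage with a fake scanner standing in for the real tool:
fake_scan = lambda target, depth: [
    {"id": "F2", "severity": "low"},
    {"id": "F1", "severity": "critical"},
]
depth, triaged = run_workflow("https://example.com", "pre-release", fake_scan)
```

Step 5 (the mandatory rerun) would call `run_workflow` again after deployment and compare the two result sets before closing anything.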

Operational Playbook

Use this long-form guidance to execute the workflow consistently across planning, implementation, and validation.

Define Scan Objectives Before Touching Configuration

High-quality scan execution starts with a clear objective. Before selecting depth or workflow, define whether the run is baseline visibility, pre-release confidence, post-remediation verification, or recurring governance review. Each objective demands different rigor, timelines, and acceptance criteria. Teams that skip this step often run shallow diagnostics for high-stakes decisions, then discover unresolved risk late in delivery. A simple objective statement keeps execution aligned: what are we trying to prove, by when, and for which audience? Once that is explicit, depth selection and rerun planning become straightforward. This also improves communication across engineering and leadership because everyone understands why a specific scan path was chosen. In mature operations, objective-first scanning reduces noise, clarifies priority, and improves the credibility of security decisions tied to releases.

Choose Workflow Entry Point Based On Required Continuity

Use focused public tools when you need quick category-specific answers, and use dashboard workflows when you need saved reports, historical comparison, ownership continuity, and repeatable execution. The important distinction is not speed alone; it is continuity of evidence and follow-through. If findings will be triaged, assigned, and revisited, account-backed workflows are usually the right path. If you only need a fast directional signal, a focused tool can be appropriate. Teams often lose time by switching between paths without intent, which fragments evidence and complicates decision-making. Standardize this decision at kickoff: are we gathering a quick signal or building a tracked remediation cycle? That one choice prevents downstream confusion and helps keep scan results useful for both technical execution and stakeholder reporting.

Operationalize Severity-First Triage And Ownership

A scan is only useful when triage converts output into action. Start by grouping findings by impact, then assign clear owners and target closure windows for critical and high-severity issues first. Avoid broad, unprioritized remediation queues, because they delay risk reduction and hide what matters most. Document whether each finding is actively being fixed, deferred with rationale, or pending additional validation. Keep this status visible so rerun interpretation remains grounded in real progress rather than assumptions. If your team supports multiple environments, ensure triage reflects production exposure and business impact, not just technical complexity. Consistent ownership discipline is the bridge between scanning and actual security improvement. Without it, even frequent scans can create activity without meaningful posture change.
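The triage discipline above can be made concrete with a small sketch. The severity bands, the closure windows, and the `owner_for` callback are all assumptions for illustration; real values would come from your own policy and routing rules.

```python
from collections import defaultdict

SEVERITY_ORDER = ["critical", "high", "medium", "low"]

# Hypothetical target closure windows in days, per severity band.
CLOSURE_DAYS = {"critical": 2, "high": 7, "medium": 30, "low": 90}

def triage(findings, owner_for):
    """Group findings by severity, attach an owner and a closure window,
    and emit them in severity-first order so critical work surfaces first."""
    queue = defaultdict(list)
    for f in findings:
        queue[f["severity"]].append({
            **f,
            "owner": owner_for(f),                          # explicit ownership
            "closure_days": CLOSURE_DAYS.get(f["severity"]),
            "status": "open",                               # visible status, not assumed fixed
        })
    return [f for sev in SEVERITY_ORDER for f in queue.get(sev, [])]

# Usage: a trivial routing rule standing in for real ownership mapping.
result = triage(
    [{"id": "A", "severity": "low"}, {"id": "B", "severity": "critical"}],
    owner_for=lambda f: "team-app",
)
```

Keeping `status` and `closure_days` on every record is what lets rerun interpretation stay grounded in real progress rather than assumptions.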

Treat Reruns As Mandatory Evidence, Not Optional Cleanup

Remediation deployment is not proof of closure. Reruns are the evidence step that confirms whether risk was actually reduced in the target environment. Build reruns into your workflow definition from the start: who runs them, when, and what criteria mark success. Compare before-and-after outputs and capture differences explicitly so teams can verify progress with confidence. If findings persist, do not force closure status; use the new output to refine root-cause analysis and continue remediation. This discipline prevents stale assumptions and improves reporting accuracy for internal and external stakeholders. In release-focused environments, a mandatory rerun policy is one of the fastest ways to improve trust in scan-driven decisions because every closure claim is backed by fresh, observable results.
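The before-and-after comparison is mechanical once findings carry stable identifiers. A minimal sketch, assuming each finding has an `id` field (a placeholder; your tool's identifier may differ):

```python
def compare_runs(before, after):
    """Diff two finding sets by id into closed, persisting, and new findings."""
    before_ids = {f["id"] for f in before}
    after_ids = {f["id"] for f in after}
    return {
        "closed": sorted(before_ids - after_ids),      # evidence of real risk reduction
        "persisting": sorted(before_ids & after_ids),  # still open work, not closed
        "new": sorted(after_ids - before_ids),         # regressions or newly exposed issues
    }

# Usage: F1 was fixed, F2 persists, F3 appeared after the change.
diff = compare_runs(
    before=[{"id": "F1"}, {"id": "F2"}],
    after=[{"id": "F2"}, {"id": "F3"}],
)
```

Only entries in `closed` justify marking work complete; `persisting` and `new` feed back into triage.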

Create A Repeatable Scan Cadence Aligned To Delivery Risk

Comprehensive security posture is built through cadence, not isolated scan events. Define when scans should occur across your lifecycle: pre-release checkpoints, post-remediation verification windows, and recurring baseline monitoring for production assets. Match cadence intensity to change velocity and business risk. Fast-moving environments usually need tighter scan loops to keep findings current and actionable. Publish this cadence so engineering, product, and security teams operate from one expectation model. Include escalation triggers, such as repeated critical findings or cross-category patterns that require broader review. Over time, cadence alignment turns scanning into a stable operational control rather than a reactive task. The outcome is better predictability, fewer late surprises, and stronger confidence that release decisions are informed by current evidence.
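A cadence rule can be published as a simple, reviewable function. The thresholds and intervals below are invented examples of the "tighter loops for fast-moving, high-risk assets" principle, not recommended values.

```python
def scan_interval_days(deploys_per_week, business_risk):
    """Illustrative cadence rule: map change velocity and business risk
    to a recurring scan interval in days."""
    if business_risk == "high":
        # High-risk assets get daily loops when change velocity is high.
        return 1 if deploys_per_week >= 5 else 7
    if business_risk == "medium":
        return 7 if deploys_per_week >= 5 else 14
    # Low-risk, slow-moving assets fall back to monthly baseline monitoring.
    return 30
```

Encoding the cadence this way gives engineering, product, and security one expectation model they can read and change review it like any other control.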

Validation Checklist

Use this checklist to confirm the workflow was completed correctly.

  • Chosen depth matches release risk and confidence requirement.
  • Findings are triaged by impact before remediation work starts.
  • Owners and closure windows are defined for high-impact findings.
  • Post-fix rerun is scheduled or completed.
  • Result history shows measurable improvement after remediation.

Common Problems And Fixes

If something does not match expectation, check these common failure modes first.

Running only one scan before release

Use reruns after remediation and at key milestones so security posture decisions are evidence-backed.

Using quick diagnostics for full launch sign-off

Quick diagnostics are useful, but broader validation usually needs deeper scan depth and saved workflow review.

Choosing depth by habit instead of objective

Set objective first, then depth. This avoids shallow runs being used where release-grade confidence is required.

Treating findings as complete after patch deployment

Deployment is not validation. Rerun and compare findings to confirm true closure before marking the issue resolved.

Running Scans Help FAQs

Which scan depth should I use?

Standard is usually the best default for production checks. Use Quick for baseline speed and deeper options when confidence needs are higher.

Next Recommended Action

Continue to the best next page based on where you are in your workflow.