Updated May 2026.

A cybersecurity audit is a structured review of your organization's IT systems, policies, and controls to identify gaps before attackers find them. It's not a one-time event — it's a recurring process that tells you where you actually stand, not where you hope you stand.

This checklist covers the areas that matter most for small and mid-sized businesses. Work through it systematically. Not everything will apply to every organization, and you don't have to fix everything at once — but you do need to know what's there.

Before You Start: Scope and Asset Inventory

An audit without a defined scope produces incomplete results. Before reviewing controls, document what you're auditing: every device on the network, every cloud application in use, every system that stores or transmits sensitive data. This includes shadow IT — tools employees adopted without IT involvement, like personal Dropbox accounts used for work files or unapproved messaging apps.

Common gaps here: cloud services provisioned outside IT's visibility, personal devices used for work email without MDM enrollment, and aging servers or network appliances that never made it onto the official asset list.
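A quick way to operationalize this step is to diff what a network scan actually finds against the official asset list. The sketch below is a minimal illustration with made-up hostnames; in practice the discovered list would come from an nmap sweep or an RMM agent export, and the official list from your CMDB or asset spreadsheet.

```python
def find_shadow_assets(official_inventory, discovered_hosts):
    """Devices seen on the network that are missing from the asset list."""
    official = {h.lower() for h in official_inventory}
    return sorted(h for h in discovered_hosts if h.lower() not in official)

# Made-up hostnames for illustration:
official_inventory = ["fileserver01", "dc01", "laptop-jsmith"]
discovered_hosts = ["fileserver01", "dc01", "laptop-jsmith",
                    "nas-unknown", "printer-old"]

print(find_shadow_assets(official_inventory, discovered_hosts))
# → ['nas-unknown', 'printer-old']
```

Anything the diff surfaces goes onto the inventory first, and only then into the risk conversation.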

Identity and Access Controls

Compromised credentials are the leading initial access vector in breaches. According to the FBI's 2024 IC3 Report, business email compromise alone, which almost always starts with stolen credentials, generated $2.77 billion in reported losses in 2024.

Check these specifically:

  • MFA enforcement: Is multi-factor authentication required (not just available) on email, VPN, RDP, and cloud applications? Microsoft 365 accounts without MFA enforced via Conditional Access are an active exposure. "We have MFA set up" is not the same as "MFA is required for every login."
  • Principle of least privilege: Do users have access only to what their role requires? A marketing coordinator shouldn't have read/write access to financial systems. Review access permissions across your directory — most organizations find significant over-provisioning when they actually look.
  • Privileged account controls: Admin accounts should not be used for day-to-day work. Separate admin accounts, used only when elevated access is needed, limit the blast radius if a credential is compromised.
  • Offboarding procedures: Are accounts deprovisioned immediately when employees leave? Shared accounts whose credentials a former employee still knows are a standing risk. A centralized identity platform like Microsoft Entra ID makes deprovisioning auditable.
  • Password policies: Are password complexity requirements enforced at the directory level, not just documented in a policy nobody reads? Is there a mechanism to detect and reset passwords that appear in known breach databases?
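The breach-database check above can be done safely with the Have I Been Pwned "Pwned Passwords" range API, which uses k-anonymity: only the first five characters of the password's SHA-1 hash are ever sent, and the match happens locally. A minimal Python sketch of the hashing step, with the network call left as a comment:

```python
import hashlib

def hibp_range_parts(password):
    """Split a password's uppercase SHA-1 hex digest into the 5-character
    prefix sent to the range API and the suffix matched locally.
    Only the prefix ever leaves your machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_parts("password")
# GET https://api.pwnedpasswords.com/range/<prefix> returns candidate
# suffixes with breach counts; the password is compromised if <suffix>
# appears in that response.
print(prefix)  # → 5BAA6
```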

Endpoint Security

  • EDR coverage: Is endpoint detection and response software deployed on every device — laptops, desktops, and servers? Traditional antivirus catches known malware signatures; EDR tools like CrowdStrike, SentinelOne, or Microsoft Defender for Endpoint detect behavioral anomalies that signature-based tools miss entirely.
  • Patch status: What's the patch lag across your endpoints? CISA's Known Exploited Vulnerabilities catalog lists CVEs that are actively being used in real attacks. If any of those are unpatched in your environment, you have a documented, exploitable gap. Patch management through an RMM platform automates this and gives you a compliance report.
  • Disk encryption: Are laptops encrypted with BitLocker (Windows) or FileVault (Mac)? A stolen laptop with unencrypted storage is a data breach. This is also a compliance requirement under HIPAA and several state privacy laws.
  • Mobile device management: Are company-owned and BYOD devices enrolled in MDM? Can you remotely wipe a lost device? MDM platforms like Microsoft Intune enforce encryption, require screen lock PINs, and allow remote wipe — none of which you have without enrollment.
  • DNS filtering: Is outbound DNS traffic filtered to block connections to known malicious domains? Tools like Cisco Umbrella or Cloudflare Gateway operate at the DNS layer and stop many threats before any payload reaches the endpoint.
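Cross-referencing your environment against the KEV catalog is straightforward set arithmetic. The sketch below uses hardcoded sample CVE lists for illustration; in a real audit the KEV list comes from CISA's published JSON feed and the installed/patched lists from your vulnerability scanner.

```python
def unpatched_kev_exposures(kev_cves, installed_cves, patched_cves):
    """CVEs present in the environment, on the KEV list, and not yet patched."""
    return sorted((set(installed_cves) & set(kev_cves)) - set(patched_cves))

# Sample data for illustration only:
kev = ["CVE-2023-34362", "CVE-2024-3400"]
installed = ["CVE-2023-34362", "CVE-2024-3400", "CVE-2022-0001"]
patched = ["CVE-2023-34362"]

print(unpatched_kev_exposures(kev, installed, patched))  # → ['CVE-2024-3400']
```

Anything this returns is, by definition, a vulnerability attackers are actively exploiting somewhere, which makes it first in the patch queue.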

Network Security

  • RDP exposure: Is Remote Desktop Protocol (port 3389) open to the internet? This is one of the most commonly exploited entry points in ransomware attacks. If RDP is needed for remote access, it should be behind a VPN or Zero Trust Network Access solution — never directly internet-facing.
  • Firewall rule review: When were your firewall rules last reviewed? Accumulated rules from departed employees, old projects, and legacy systems create unnecessary exposure. Rules that allow inbound traffic should be justified and documented.
  • Network segmentation: Are sensitive systems — servers containing financial data, HR records, or PHI — isolated from general user traffic? Flat networks mean a compromised employee laptop can directly reach your most sensitive systems. Segmentation limits lateral movement.
  • Wi-Fi security: Are guest networks isolated from the corporate network? Is WPA3 or at minimum WPA2-Enterprise in use? Is there a process for rotating Wi-Fi credentials when employees leave?
  • VPN and remote access: Is VPN software current and patched? Fortinet, Pulse Secure, and Citrix VPN vulnerabilities have all appeared on CISA's Known Exploited Vulnerabilities list. Unpatched VPN appliances are a favorite ransomware entry point.
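Checking whether a port like 3389 answers takes only a few lines of Python. This is a minimal TCP connect probe, not a substitute for a proper external scan, and it should be run from outside your network (a cloud VM, for example) to reflect the attacker's view; the IP below is a documentation placeholder.

```python
import socket

def is_port_open(host, port, timeout=2.0):
    """Attempt a TCP connection; True means something answered on that port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Run from OUTSIDE your network against your public IP:
# if is_port_open("203.0.113.10", 3389):
#     print("RDP is internet-facing; move it behind VPN or ZTNA")
```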

Email Security

  • Anti-phishing controls: Does your email platform have anti-phishing and anti-spoofing filters enabled? Microsoft Defender for Office 365 and Google Workspace's Advanced Protection both provide link scanning, attachment sandboxing, and impersonation detection beyond basic spam filtering.
  • DMARC, DKIM, and SPF: Are your domain's email authentication records configured? DMARC enforces what happens when someone sends email claiming to be from your domain — without it, attackers can impersonate your organization to your clients and partners. Check your configuration at MXToolbox.
  • Phishing simulation: When did you last run a simulated phishing campaign? Platforms like KnowBe4 let you send realistic phishing simulations to your employees and track who clicks. The click rate tells you something policy documents can't — actual susceptibility under realistic conditions.
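To see what a DMARC check involves, here is a minimal parser for the TXT record published at _dmarc.&lt;yourdomain&gt;. The record string below is a hypothetical example; in practice you would fetch the real one with a DNS lookup (e.g. dig TXT _dmarc.example.com).

```python
def parse_dmarc(txt_record):
    """Split a DMARC TXT record into its tag/value pairs."""
    tags = {}
    for part in txt_record.split(";"):
        key, sep, value = part.strip().partition("=")
        if sep:
            tags[key.strip()] = value.strip()
    return tags

# Hypothetical record for illustration:
record = "v=DMARC1; p=reject; rua=mailto:dmarc@example.com"
print(parse_dmarc(record).get("p", "none"))  # → reject
```

A policy of p=none means DMARC is monitoring only; p=quarantine or p=reject is what actually stops spoofed mail.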

Backup and Recovery

  • Backup coverage: Are all critical systems and data being backed up? This includes cloud-hosted data — Microsoft 365 and Google Workspace do not provide point-in-time recovery by default. Third-party backup tools are required for recoverable email and file history.
  • Offline or immutable backups: Are your backups isolated from your primary network? Ransomware operators routinely locate and destroy backup systems before triggering encryption. Backups accessible through the same credentials or connected to the same network are not reliable recovery options.
  • Recovery testing: When did you last test a restore? A backup you've never tested is an untested hypothesis. Run a recovery drill at least annually — restore a specific file, a specific system, or a full environment depending on your RTO requirements.
  • RTO and RPO documentation: How long can your organization operate without each critical system? How much data loss is acceptable? These targets drive backup frequency and recovery architecture. Without defined RTOs and RPOs, you can't evaluate whether your current backup strategy is actually sufficient.
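Restore testing can be partly automated: after restoring a file from backup, verify it against the original by comparing checksums. A minimal sketch; the paths in the comment are hypothetical.

```python
import hashlib

def files_match(path_a, path_b, chunk_size=65536):
    """Compare two files by SHA-256 digest, reading in chunks."""
    def digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()
    return digest(path_a) == digest(path_b)

# After a restore drill, e.g.:
# assert files_match("/data/report.xlsx", "/restore-test/report.xlsx")
```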

Vulnerability Management

  • Scanning cadence: Are you running vulnerability scans regularly — at minimum quarterly, monthly for higher-risk environments? Tools like Tenable Nessus or Qualys identify unpatched software, misconfigured services, and weak configurations across your environment.
  • Remediation tracking: Are scan findings tracked to closure? A vulnerability scan that produces a report nobody acts on has zero security value. There should be a defined process: findings are triaged by severity, assigned owners, and resolved within defined timelines.
  • Penetration testing: Has an external penetration test been conducted in the past 12–24 months? Vulnerability scanning identifies known issues; penetration testing evaluates whether those issues can be chained together to achieve meaningful compromise. Required for SOC 2 and many CMMC Level 2 implementations.
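Remediation tracking is easy to sketch: give each severity a deadline and flag open findings past it. The SLA values and finding records below are hypothetical; substitute your own policy and scanner export.

```python
from datetime import date, timedelta

# Hypothetical severity-based deadlines (days to close a finding):
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def overdue_findings(findings, today):
    """IDs of open findings whose age exceeds the SLA for their severity."""
    overdue = []
    for f in findings:
        deadline = f["found"] + timedelta(days=SLA_DAYS[f["severity"]])
        if f["status"] == "open" and today > deadline:
            overdue.append(f["id"])
    return overdue

findings = [
    {"id": "VULN-1", "severity": "critical", "found": date(2026, 1, 1), "status": "open"},
    {"id": "VULN-2", "severity": "low", "found": date(2026, 1, 1), "status": "open"},
]
print(overdue_findings(findings, today=date(2026, 2, 1)))  # → ['VULN-1']
```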

Incident Response Readiness

  • Written IR plan: Is there a documented incident response plan that specifies who does what when a breach occurs? This means named roles, contact lists, defined escalation paths, and specific procedures for the most likely incident types — ransomware, data exfiltration, credential compromise, and business email compromise each warrant their own playbook.
  • Tabletop exercises: Has the plan been tested? A tabletop exercise — walking through a simulated incident scenario with the relevant people in the room — surfaces gaps and confusion in the plan before a real incident does. Annual exercises are the minimum; twice yearly for organizations in high-risk industries.
  • Legal and regulatory notification requirements: Does your team know the notification timelines? HIPAA requires breach notification to HHS and affected individuals within 60 days. Many state privacy laws have shorter windows. Not knowing these timelines before an incident adds regulatory exposure on top of an already bad situation.
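The deadline math itself is simple enough to automate. The sketch below encodes HIPAA's 60-day individual-notification window and GDPR's 72-hour (3-day) supervisory-authority window as examples; confirm the windows that actually apply to your organization with counsel, since state laws vary.

```python
from datetime import date, timedelta

# Example windows only -- verify against the actual regulations:
NOTIFICATION_WINDOWS_DAYS = {"hipaa": 60, "gdpr": 3}

def notification_deadline(discovery_date, framework):
    """Latest permissible notification date after breach discovery."""
    return discovery_date + timedelta(days=NOTIFICATION_WINDOWS_DAYS[framework])

print(notification_deadline(date(2026, 3, 1), "hipaa"))  # → 2026-04-30
```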

Compliance and Policy Review

  • Applicable frameworks: Which regulatory frameworks apply to your organization — HIPAA, CMMC, SOC 2, PCI DSS, state privacy laws? Each has specific technical and administrative control requirements. An audit should map your current controls against those requirements explicitly, not generally.
  • Policy documentation: Are your security policies documented, current, and actually enforced? An acceptable use policy written in 2018 that doesn't address cloud applications, remote work, or AI tools isn't governing employee behavior. Policies should be reviewed annually and updated when the threat landscape or business operations change materially.
  • Audit logs: Are you retaining logs from critical systems — authentication events, privileged account activity, firewall traffic, endpoint activity — for a sufficient period? HIPAA requires audit logs for six years. SOC 2 typically requires 12 months minimum. Log retention is also essential for forensic investigation after an incident.
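A retention check reduces to comparing your oldest retained log against the required look-back window. A minimal sketch (naive year arithmetic; it does not handle a February 29 "today"):

```python
from datetime import date

def retention_gap_days(oldest_log_date, required_years, today):
    """Days by which the oldest retained log falls short of the required
    look-back window; 0 means the requirement is met."""
    required_start = date(today.year - required_years, today.month, today.day)
    return max(0, (oldest_log_date - required_start).days)

# HIPAA example: six years of audit documentation.
gap = retention_gap_days(date(2024, 1, 1), 6, today=date(2026, 6, 1))
print(gap)
```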

Third-Party and Vendor Risk

  • Vendor access review: Which third-party vendors have access to your systems or data? Each represents a potential entry point. Vendor access should be scoped to the minimum required, governed by a signed agreement, and reviewed periodically.
  • Business associate agreements: If you're subject to HIPAA and any vendor touches PHI, a signed BAA is required. Missing BAAs are a compliance violation regardless of whether a breach has occurred.
  • Supply chain risk: Do critical vendors have their own documented security programs? The SolarWinds attack compromised thousands of organizations through a trusted software update mechanism. Asking vendors for SOC 2 reports or security questionnaires isn't paranoia — it's reasonable due diligence.

What to Do with the Findings

An audit produces a list of gaps. That list needs to be triaged: not everything requires immediate action, but everything requires a decision. Rank findings by the combination of likelihood and impact. Missing MFA enforcement on your Microsoft 365 tenant is a higher priority than an outdated server that isn't internet-facing.
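That likelihood-times-impact ranking can be made concrete with simple scoring. The 1-5 scales and the findings below are purely illustrative:

```python
# Illustrative findings on hypothetical 1-5 likelihood/impact scales:
findings = [
    {"name": "MFA not enforced on M365", "likelihood": 5, "impact": 5},
    {"name": "RDP exposed to internet", "likelihood": 5, "impact": 4},
    {"name": "Outdated internal-only server", "likelihood": 2, "impact": 3},
]

def triage(findings):
    """Rank findings by likelihood x impact, highest risk first."""
    return sorted(findings, key=lambda f: f["likelihood"] * f["impact"], reverse=True)

for f in triage(findings):
    print(f["likelihood"] * f["impact"], f["name"])
# → 25 MFA not enforced on M365
# → 20 RDP exposed to internet
# → 6 Outdated internal-only server
```

The numbers matter less than the forcing function: every finding gets a score, an owner, and a place in the queue.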

Assign each finding an owner and a target remediation date. Build a remediation backlog that leadership can review. The audit is only useful if it drives action — a completed checklist that produced no changes accomplished nothing. For organizations whose audit surfaces incident response gaps, the DOJ framework for incident response and law enforcement reporting is a practical starting point for building that plan.

For organizations that don't have the internal capacity to run this process or act on findings, working with an external partner for the initial assessment is often more efficient than attempting it in-house. An outside perspective also surfaces blind spots that internal teams miss precisely because they're too familiar with the environment.

Reach out to Stratify IT to schedule a security assessment — we'll work through this checklist against your actual environment and build a prioritized remediation plan with realistic timelines.

Learn more about our cybersecurity services to see the full range of what we offer.

Stratify IT — cybersecurity built around your business, not a template.

Frequently Asked Questions

How often should we run a cybersecurity audit?

For most small and mid-sized businesses, once a year is the minimum, not the goal. A lot can change in 12 months — new employees, new SaaS tools, a cloud migration, a vendor relationship that expanded access. Quarterly spot-checks on the highest-risk areas (identity, access, patching) make more sense than treating the annual audit as a complete picture. After a significant infrastructure change or an incident, audit that scope immediately regardless of timing.

What's the difference between a cybersecurity audit and a penetration test?

An audit reviews what controls you have in place — policies, configurations, access rules, documentation. A penetration test actively tries to exploit weaknesses to see what an attacker could actually reach. They answer different questions. An audit tells you if MFA is enforced; a pen test tells you whether someone can bypass it. Most SMBs benefit from an audit first to clean up obvious gaps, then a pen test to validate whether the fixes actually hold under pressure.

Can we run the audit internally, or do we need a third party?

Internal audits are useful and worth doing regularly, but they have a real blind spot — your team is too close to the environment. They built the systems, they know the workarounds, and they're less likely to flag things that feel normal to them. A third-party audit brings outside pattern recognition and catches assumptions your internal team stopped questioning. For compliance-driven audits (SOC 2, HIPAA, PCI), independence isn't optional — a third party is required.

Should we immediately shut down any shadow IT the audit uncovers?

Not necessarily. Shutting down a tool 40 people are actively using without a replacement plan tends to create workarounds that are harder to track than the original problem. The better approach is to assess the risk first — what data is in it, who has access, does it connect to anything internal — and then either bring it under IT management or migrate off it with a clear timeline. The goal is visibility and control, not a crackdown that drives behavior further underground.

Do we have to disclose vulnerabilities we discover during an audit?

It depends on your industry and what kind of data is at risk. Under HIPAA, discovering a misconfiguration that exposed protected health information may trigger breach notification requirements even if there's no confirmed attacker access. PCI DSS has its own disclosure rules. Outside regulated industries, there's no universal federal breach disclosure law for internal vulnerabilities — but several states have notification requirements that kick in once customer data is confirmed compromised. Document what you found, when you found it, and what you did. That record matters.

How should we prioritize audit findings?

Risk-based triage, not effort-based. A finding that represents an active exposure on a system containing customer data outranks a medium-risk configuration issue on an internal test server, even if the second one is faster to fix. Start with anything that gives an unauthenticated attacker a path in — exposed RDP, accounts without MFA, unpatched externally-facing systems. Move from there to privilege escalation risks and data exposure, then work down. Build a remediation plan with owners and deadlines, not just a list of findings.

How does auditing cloud environments like AWS or Azure differ?

The mechanics differ, but the questions are the same: who has access, what's exposed, and what's logging activity. In AWS, start with IAM — overly permissive roles and unused access keys are common problems. In Azure, Conditional Access policies and Entra ID role assignments deserve close attention. Both platforms have native tools (AWS Security Hub, Microsoft Secure Score) that give you a baseline. The biggest mistake companies make is assuming the cloud provider's default configuration is a secure configuration. It isn't.

Should employees know an audit is happening?

For most of the technical audit work — reviewing configurations, pulling logs, checking access policies — it doesn't matter whether employees know. That data doesn't change based on awareness. For anything involving behavioral observation or phishing simulation testing, there are arguments on both sides, but unannounced tests tend to produce more actionable data. The one group that should always know is IT leadership and the relevant system owners, so they can provide context and documentation without feeling blindsided by the findings.

Sharad Suthar

Sharad has a proven track record of delivering successful IT projects underpinned by creative problem-solving and strategic thinking. He brings an extraordinary combination of in-depth technical knowledge, problem-solving skills, and dedication to client satisfaction that enables him and his team at Stratify IT to deliver optimal IT solutions tailored to the specific needs of each organization, from large corporates to small businesses. His impeccable attention to detail and accuracy ensure that his clients get the best possible results.