GDPR Article 32 for Web Teams: Technical Measures Explained
https://vulnify.app/blog/gdpr-article-32-for-web-teams-technical-measures-explained

GDPR Article 32 is about the security of processing, not empty compliance language. Learn how Article 32 maps to real website and web application controls, where automated testing fits, and what evidence web teams should keep.

Legal teams often tell engineering teams they need “Article 32 measures” on the website, but that phrase can feel vague if you live in code, infrastructure, and release workflows rather than privacy law. The result is often a familiar mess: a rushed ticket to “make the site GDPR compliant,” a last-minute scan before launch, and a weak paper trail that does not really explain what security controls exist, how they were chosen, or whether anyone tested them.

That is not what Article 32 is asking for. GDPR Article 32 is about the security of processing. For web teams, that usually translates into practical questions about transport security, authentication, access control, configuration hardening, exposure reduction, resilience, vendor risk, and whether the organisation can show that its security measures are appropriate to the risk and tested over time.

This guide explains Article 32 at a high level for websites and web applications. It maps the legal language to technical themes that engineering teams can actually work with, shows where automated testing can help, and keeps a clear boundary around what scanners and reports do not prove on their own.

Disclaimer: This article is educational only. It is not legal advice, not regulatory advice, and not a substitute for the official GDPR text, guidance from a supervisory authority, or advice from qualified privacy counsel.
Table of contents

- What Article 32 actually says
- Why controllers and processors both care
- How Article 32 translates to web application themes
- Where automated security testing fits
- Example security review pattern for web releases
- Web-facing Article 32 alignment checklist
- Common mistakes teams make
- FAQ
- Conclusion
- Related articles
- References

What Article 32 actually says

Article 32 GDPR is titled “Security of processing”. It requires controllers and processors to implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, taking into account the state of the art, the costs of implementation, and the nature, scope, context, and purposes of the processing.

The Article then gives familiar examples of what appropriate measures may include: pseudonymisation and encryption; the ability to ensure ongoing confidentiality, integrity, availability, and resilience; the ability to restore availability and access after an incident; and a process for regularly testing, assessing, and evaluating the effectiveness of the measures.

That wording matters because it is risk-based, not checkbox-based. Article 32 does not give every organisation the same fixed technical recipe, and it does not say every website must look identical. It says security measures should be appropriate to the risk. A brochure website collecting a simple contact form is not the same as a customer portal handling identity documents, support conversations, and billing records. A public marketing site is not the same as a healthcare booking platform or a multi-tenant SaaS dashboard. The obligations are framed around risk, context, and the real-world impact on people if the processing is compromised.

There is also a useful relationship with Article 5. Article 5(1)(f) establishes the principle of integrity and confidentiality, while Article 5(2) establishes accountability, meaning the controller must be able to demonstrate compliance.
For engineering teams, that creates a practical message: implement sensible safeguards, and keep enough evidence to explain what you did and why. At a website and web application level, legal language about “security of processing” usually turns into engineering language about encrypted transport, least-privilege access, secure cookies, controlled changes, monitoring, incident recovery, vulnerability management, and vendor control. That is the bridge this article focuses on.

Why controllers and processors both care

A short orientation helps here. In GDPR terms, the controller decides the purposes and means of processing; the processor processes personal data on behalf of the controller. In real web stacks, that distinction often appears as a company that owns the website or application, plus one or more third parties that host infrastructure, run analytics, provide customer support tooling, process payments, deliver email, or support development and operations.

Both sides care about Article 32 because the Article applies directly to controllers and processors. Article 28 also matters because it requires controllers to use processors that provide sufficient guarantees to implement appropriate technical and organisational measures. In plain English, a controller cannot sensibly say “security is the vendor’s problem” and walk away, and a processor cannot act as if security is optional just because it does not own the customer relationship.

For web teams, this becomes practical very quickly. Who controls production deployment? Who manages CDN settings? Who can alter scripts loaded in the browser? Who operates the hosting environment, logs, backups, and access management? Who sees user support data? Who approves third-party scripts or embeds? Who can reset passwords or access admin panels? Those are not abstract legal questions. They are architecture, ownership, and evidence questions.
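One lightweight way to make those ownership questions concrete is to keep a small machine-readable inventory of who owns each security-relevant capability, and to flag anything unowned. A minimal sketch in Python; the capability names, team names, and `find_unowned` helper are illustrative assumptions, not a prescribed Article 32 format:

```python
# Illustrative capability-ownership inventory for a web stack.
# Names and structure are assumptions for this sketch only.
CAPABILITY_OWNERS = {
    "production_deployment": "platform-team",
    "cdn_settings": "platform-team",
    "third_party_scripts": "frontend-team",
    "backups_and_restoration": "ops-team",
    "admin_password_resets": None,  # unowned: a gap worth closing
}

def find_unowned(owners: dict) -> list:
    """Return capabilities that have no accountable owner."""
    return sorted(cap for cap, owner in owners.items() if not owner)

# Unowned capabilities surface as review items rather than surprises.
gaps = find_unowned(CAPABILITY_OWNERS)
print(gaps)
```

The point is not the data structure; it is that every question in the paragraph above has a named answer somewhere, and the gaps are visible.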
This is why a narrow “the privacy policy is updated, so we are done” mindset fails. Article 32 reaches into the way the website and application are actually built and operated. Contracts matter, but so do release controls, logging choices, authentication defaults, retention settings, and security testing practices.

How Article 32 translates to web application themes

Most engineering teams do not need a lawyer to repeat the text of Article 32. They need a usable map from the legal language to web-facing work. The cleanest way to do that is by operational theme.

Transport security and browser delivery

If a website or web application processes personal data, transport security is one of the first places people expect competence. TLS protects data in transit, HTTPS redirects reduce accidental exposure, and HSTS can help stop downgrade or insecure access patterns. If authentication, profile data, support messages, payment details, or admin sessions travel over insecure transport, it is very hard to argue that the security of processing is appropriate to the risk.

Transport security is not the whole story, but it is foundational. It also shows how Article 32 should be understood. The law does not say “install exactly these three headers and you are compliant.” It expects measures appropriate to the risk. For a modern website, encrypted transport and sane redirect behaviour are usually part of that baseline expectation.

Authentication, session management, and access control

Web application security fails in practice when too many people can do too much, for too long, with too little verification. Article 32 does not list session cookies, MFA, password reset abuse, or admin role design by name, but those are exactly the kinds of control decisions that affect confidentiality and integrity in live systems.
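Session cookie attributes are one of the smallest controls with outsized effect on the session risks described above. A minimal sketch in Python; the `session_cookie_header` helper and the flag choices (Secure, HttpOnly, SameSite=Lax) are common defensive defaults shown for illustration, not a compliance recipe:

```python
# Illustrative builder for a hardened session cookie header.
# The attribute choices are example defaults, not prescribed values.
def session_cookie_header(name: str, value: str, max_age: int = 3600) -> str:
    """Build a Set-Cookie header value with defensive attributes:
    Secure (HTTPS only), HttpOnly (no script access), SameSite=Lax
    (limits cross-site sending), and a bounded lifetime."""
    return (
        f"{name}={value}; "
        f"Max-Age={max_age}; "
        "Path=/; Secure; HttpOnly; SameSite=Lax"
    )

print(session_cookie_header("session_id", "abc123"))
```

A review gate that checks for exactly these attributes on authenticated responses is cheap to automate and easy to evidence.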
Engineering teams should think about who can log in, how sessions are established, how privileged actions are protected, how stale sessions expire, how support and administrative access is constrained, and whether access decisions are logged. For applications that process user accounts, identity data, support records, transaction history, or internal notes, weak authentication and sloppy access control are not just technical debt. They are governance failures with privacy consequences.

That also includes customer support tooling, staging access, CMS roles, API tokens, and forgotten administrative paths. Personal data often leaks through operational shortcuts rather than dramatic zero-days.

Application and server configuration

Article 32 also connects naturally to everyday hardening work. Secure headers, safe error handling, exposure reduction, sane default permissions, and careful handling of file uploads or debug modes all influence whether personal data is processed securely. A site that reveals internal errors, leaves administration paths exposed, misconfigures caching around authenticated pages, or permits overly broad cross-origin behaviour creates avoidable privacy risk even if no breach has happened yet.

This is one place where automated checks can be useful. Headers, redirect chains, certificate issues, exposed files, and some classes of risky behaviour are visible from the public surface. That does not tell you everything about the application, but it does tell you whether the system is exposing preventable weaknesses in places outsiders can reach.

Logging, monitoring, and retention discipline

Good security needs logging and monitoring. Bad privacy practice often leaks into logging and monitoring. Article 32 work for web teams is not just about collecting more events. It is about collecting the right events, protecting them properly, restricting access, and avoiding unnecessary personal data sprawl.
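One practical way to act on that logging discipline is to redact obvious secrets before log lines are stored. A minimal sketch in Python; the `redact_log_line` helper and the patterns below are illustrative assumptions, and a real deployment would tune them to its own token, session, and card formats:

```python
import re

# Illustrative redaction of secrets before a log line is written.
# These patterns are examples only, not an exhaustive filter.
SENSITIVE_PATTERNS = [
    # Bearer tokens in captured request headers
    (re.compile(r"(authorization: Bearer )\S+", re.IGNORECASE), r"\1[REDACTED]"),
    # Session identifiers echoed into logs
    (re.compile(r"(session_id=)\S+"), r"\1[REDACTED]"),
    # Crude guard against raw card-number-length digit runs
    (re.compile(r"\b\d{13,16}\b"), "[REDACTED-PAN?]"),
]

def redact_log_line(line: str) -> str:
    """Replace obvious secrets in a log line before it is stored."""
    for pattern, replacement in SENSITIVE_PATTERNS:
        line = pattern.sub(replacement, line)
    return line

print(redact_log_line("login ok session_id=9f8e7d user=42"))
```

Redaction at write time is not a substitute for access control and retention limits on the log store, but it shrinks the blast radius when logs are over-shared.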
For example, security logs may need to record login attempts, role changes, failed access, configuration changes, or suspicious requests. At the same time, teams should avoid casually storing secrets, full card details, session tokens, or excessive raw personal data in logs. They should know how long logs are kept, who can search them, and how they are protected. Logging is valuable security evidence, but it must be controlled.

This is a good place to remember the risk-based language again. The point is not “log everything forever.” The point is to support security and accountability without creating a second, unmanaged data exposure problem.

Backups, resilience, and recovery

Article 32 explicitly mentions availability and resilience, and the ability to restore availability and access to personal data in a timely manner after an incident. For web applications, that points toward backup strategy, restoration procedures, infrastructure resilience, incident handling, and operational readiness.

A system that encrypts traffic but cannot restore service or recover data after a failure is not fully aligned with the spirit of Article 32. Likewise, a team that has backups but has never tested restoration may have less resilience than it thinks. The requirement is not only to possess backup files. It is to be able to restore availability and access in practice.

For many teams, the correct level of depth here is simple: know what is backed up, how often, where it lives, who can access it, how long it is retained, and whether restoration has been tested. That is enough to keep the article practical without turning it into a disaster recovery manual.

Vendor and subprocessor touchpoints

Modern websites rarely operate alone. Third-party analytics, payment providers, chat tools, session replay tools, CDNs, embedded content, customer support systems, email vendors, and cloud infrastructure providers all create dependencies. Some are processors.
Some are controllers in their own right for their own purposes. Some relationships are mixed and need careful analysis.

For Article 32 purposes, the practical web question is this: what third parties can affect the secure processing of personal data through your website or application, and what controls exist around them? That may include contract terms, access limitations, script approval workflows, subprocessor review, vendor inventories, and change management for embedded services. This is one reason security and privacy work on websites should not be split into completely separate silos.

Where automated security testing fits

Article 32 explicitly mentions a process for regularly testing, assessing, and evaluating the effectiveness of technical and organisational measures. That makes automated security testing relevant, but it still needs careful framing.

A scan is not the law. A report is not a regulatory exemption. A clean result does not prove that the whole processing operation is secure. But testing can absolutely support a stronger Article 32 story when it is used honestly and consistently.

For a website or public-facing application, automated testing can help identify exposed weaknesses that matter to confidentiality, integrity, availability, and resilience. It can help teams catch missing headers, insecure redirects, exposed files, risky public endpoints, TLS issues, or misconfigurations that widen attack surface. It can also help create a dated record of what was tested, what was found, what was fixed, and what was retested.

That is where a platform like Vulnify fits naturally. Public-surface testing can help teams build evidence around the testing and evaluation part of Article 32, especially when results are reviewed, findings are assigned, and fixes are retested. It is useful evidence. It is not a standalone legal conclusion. The honest boundary matters.
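A dated testing record of the kind described above does not need heavy tooling. A minimal sketch in Python; the field names and the `make_finding_record` and `mark_retested` helpers are illustrative assumptions for this article, not a Vulnify API or a prescribed legal format:

```python
from datetime import date

# Illustrative dated evidence record for a security finding.
# Field names are assumptions for this sketch only.
def make_finding_record(finding: str, severity: str, owner: str) -> dict:
    """Create a dated, assignable record of a security finding."""
    return {
        "finding": finding,
        "severity": severity,
        "owner": owner,
        "found_on": date.today().isoformat(),
        "fixed_on": None,
        "retested_on": None,
    }

def mark_retested(record: dict, passed: bool) -> dict:
    """Record a retest result while preserving the dated history."""
    record["retested_on"] = date.today().isoformat()
    record["retest_passed"] = passed
    return record

record = make_finding_record("missing HSTS header", "medium", "platform-team")
record = mark_retested(record, passed=True)
```

What matters is the shape: found, owned, fixed, retested, each with a date, so the record can later show that measures were evaluated over time rather than asserted once.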
Automated testing may support a claim that the team is taking web risk seriously and regularly evaluating exposed controls. It does not prove that every internal control is adequate, every vendor relationship is governed correctly, or every processing activity is secure. It helps answer part of the Article 32 question, not all of it.

That is also why exported evidence matters. If a team uses reports and exports to preserve security findings, retest history, and remediation notes, it improves accountability and operational memory. But again, the right phrasing is that this supports documentation of security measures and their evaluation. It should not be sold as “our scanner makes you GDPR compliant.”

Example security review pattern for web releases

The simplest way to make Article 32 real for engineering teams is to attach it to the release process. Not every content change needs a legal meeting, but security-relevant changes to a personal-data handling website should be visible, reviewable, and testable.

    release = {
        "area": "customer-account",
        "change_type": "auth-and-profile-update",
        "handles_personal_data": True,
        "requires_security_review": True,
        "status": "blocked",
    }

    if release["requires_security_review"]:
        confirm_code_review()
        confirm_access_controls_checked()
        confirm_cookie_and_session_settings_checked()
        confirm_logging_changes_reviewed()
        run_public_surface_tests()
        review_findings_and_assign_owners()
        rerun_after_fixes()
        attach_results_to_release_record()
        release["status"] = "ready_for_approval"

This is intentionally simple. The value is not the pseudocode itself. The value is the discipline behind it: identify changes that affect personal-data processing, review them against security expectations, test them, fix issues, retest, and preserve evidence.

Web-facing Article 32 alignment checklist

- Document what personal data the website or application processes. You cannot choose measures appropriately if you do not understand the processing context.
- Map the high-risk user journeys. Focus on login, account recovery, profile management, forms, uploads, support interactions, and any customer portal or admin area.
- Keep transport security current. Use HTTPS consistently, review redirects, and monitor certificate hygiene.
- Review authentication and session controls. Protect administrative access, review password reset flows, and harden session handling.
- Reduce unnecessary exposure. Remove debug behaviour, exposed files, stale endpoints, and avoidable public attack surface.
- Use logging carefully. Capture security-relevant activity, but do not casually dump secrets or excessive personal data into logs.
- Test resilience. Know how backups, restoration, failover, and incident response work for systems handling personal data.
- Track vendor and script dependencies. Know which third parties can access data or affect the browser experience.
- Test regularly and after meaningful change. Article 32 is about ongoing effectiveness, not one pre-launch snapshot.
- Keep evidence. Preserve scan results, tickets, approvals, retests, and notes that explain what was assessed and what changed.
- Use careful language. Say what controls exist and how they were evaluated. Do not claim a single tool or one scan proves full compliance.

Common mistakes teams make

The most common Article 32 mistake is reducing it to vague legal theatre. Teams say “we have encryption” and stop thinking. Or they buy a tool, run it once, and assume that testing has been handled. Or they focus on the privacy notice while leaving authentication, logging, and public exposure weak.

- Treating Article 32 as a one-time task. The Article is about ongoing measures and ongoing evaluation.
- Confusing privacy paperwork with security operations. Policies matter, but so do release gates, session controls, and tested restoration.
- Logging too much. Poorly controlled logs can become their own data protection problem.
- Ignoring third-party dependencies.
Browser scripts, support tools, analytics platforms, and vendors can materially affect the security of processing.
- Overclaiming what testing proves. Scanners and pentests help, but they do not automatically settle the legal question of adequacy in context.
- Keeping weak evidence. If the organisation cannot show what was tested, fixed, or approved, accountability becomes much harder to defend.

FAQ

Does Article 32 apply only to large SaaS companies?

No. Article 32 is not limited to large software firms. It applies where personal data is processed and the controller or processor must implement appropriate security measures. The right measures vary by risk, scale, and context, but the obligation is not reserved for large platforms.

Is encryption enough for Article 32?

No. Encryption is specifically mentioned in Article 32, but it is only one example of an appropriate measure. The Article also refers to confidentiality, integrity, availability, resilience, restoration after incidents, and regularly testing the effectiveness of measures.

Does a vulnerability scan satisfy Article 32?

No. A scan may support the testing and evaluation part of your security story, especially for public-facing assets, but it does not by itself prove that all technical and organisational measures are appropriate in context.

Who is responsible, the client or the development agency?

That depends on the relationship and the actual roles. In many projects the client is the controller and the agency or hosting provider acts as a processor for some activities. But roles can vary. The important point for this article is that controllers and processors both care about Article 32, and controllers should use processors that provide sufficient guarantees under Article 28.

What about cookies and consent?

Cookies and consent are related privacy topics, but they are not the whole of Article 32.
Even if cookie consent is handled elsewhere, security of processing still matters for the personal data your website or application handles. Keep the boundaries clear so you do not mistake consent tooling for a complete security programme.

What if we use US-based SaaS vendors?

That can raise additional transfer and contract questions, but those are not the main focus of this article. Article 32 still matters because you must consider whether the technical and organisational measures around the processing are appropriate. Transfer questions should be reviewed separately with current legal guidance and counsel where needed.

What kind of evidence should we keep?

Keep the evidence that explains what measures exist and how they were evaluated. That may include architecture notes, access control reviews, ticket history, security findings, exported reports, retest results, backup test records, vendor reviews, and change approvals.

Conclusion

GDPR Article 32 becomes much easier to handle when teams stop reading it as abstract legal pressure and start reading it as an operational security standard for real processing systems. At website level, that usually means choosing measures appropriate to risk, operating them consistently, testing whether they work, and keeping enough evidence to explain those choices later.

That is why Article 32 should not be reduced to “we turned on HTTPS” or “we passed a scan.” It is an ongoing discipline around confidentiality, integrity, availability, resilience, and accountability. Tools like Vulnify can support that discipline by helping teams evaluate public-surface exposures and preserve testing evidence, but the broader obligation still depends on the full processing context and the wider control environment.
Related articles

- Web Security and Regulation
- What PCI DSS Expects for Web Applications
- Secure SDLC for Web Apps: Where Scanning Fits
- Cyber Security vs Compliance: What Matters Most
- Security Headers Explained: CSP, HSTS, X-Frame-Options
- How to Read a Vulnerability Scan Report

References

ICO material below reflects how one supervisory authority explains security expectations in English under the UK GDPR regime. If you operate mainly in the EU, read it alongside guidance from your national data protection authority and from the EDPB, and always prioritise the official GDPR text on EUR-Lex for binding wording.

- EUR-Lex, Regulation (EU) 2016/679 (General Data Protection Regulation), especially Articles 5, 28, and 32. Accessed 2026-03-28.
- EUR-Lex, Regulation (EU) 2016/679, PDF version. Accessed 2026-03-28.
- European Data Protection Board, Secure personal data. Accessed 2026-03-28.
- European Data Protection Board, Guidelines 9/2022 on personal data breach notification under GDPR (secondary: incident response and restoring availability after a breach; useful context next to Article 32, not a definition of Article 32 measures themselves). Accessed 2026-03-28.
- ICO, A guide to data security. Accessed 2026-03-28.
- ICO, Data protection by design and by default. Accessed 2026-03-28.