# Understanding Laws on Confidentiality in a Data-Driven Society

The digital transformation of modern society has fundamentally altered how organisations collect, process, and store personal information. Every online transaction, social media interaction, and digital service subscription generates data trails that reveal intimate details about individuals’ lives, preferences, and behaviours. This unprecedented scale of data collection has necessitated robust legal frameworks to protect individual privacy and ensure organisational accountability. The intersection of technology and law has created a complex regulatory landscape where businesses must navigate stringent compliance requirements while leveraging data for innovation and competitive advantage. Understanding these confidentiality laws isn’t merely a legal obligation—it’s a business imperative that shapes customer trust, operational practices, and strategic decision-making in an increasingly interconnected world.

## Foundations of data protection law: GDPR, DPA 2018, and international frameworks

The regulatory architecture governing data confidentiality has evolved significantly over the past decade, with the General Data Protection Regulation (GDPR) establishing the gold standard for privacy protection globally. Implemented in May 2018, the GDPR fundamentally transformed how organisations approach personal data handling, introducing comprehensive obligations that extend far beyond traditional data security measures. The regulation’s extraterritorial scope means that any organisation processing EU residents’ data—regardless of geographical location—must comply with its provisions, effectively creating a global privacy baseline that has influenced legislation worldwide.

What makes the GDPR particularly significant is its principles-based approach rather than prescriptive rules. This framework requires organisations to demonstrate accountability and embed privacy considerations into their operational DNA. The regulation recognises seven foundational principles that govern all data processing activities: lawfulness, fairness, and transparency; purpose limitation; data minimisation; accuracy; storage limitation; integrity and confidentiality; and accountability. Each principle interconnects to create a comprehensive protection framework that balances individual rights with legitimate business interests.

## GDPR Article 5 principles: lawfulness, fairness, and transparency in data processing

Article 5 of the GDPR establishes the cornerstone principles that underpin all data processing activities. The lawfulness requirement mandates that organisations must identify a valid legal basis before collecting or using personal data—whether consent, contractual necessity, legal obligation, vital interests, public task, or legitimate interests. This isn’t a box-ticking exercise; you must genuinely assess which basis applies to each processing activity and document your rationale. The fairness principle demands that data processing doesn’t adversely affect individuals in unexpected or deceptive ways, whilst transparency requires clear, accessible communication about how personal information will be used.

These principles create practical obligations that permeate every aspect of data handling. For instance, transparency manifests in privacy notices that must explain processing purposes in plain language, identify data recipients, specify retention periods, and inform individuals of their rights. Many organisations struggle with this requirement, producing lengthy legal documents that technically comply but fail to genuinely inform. Effective transparency means presenting information in layered formats, with concise summaries supported by detailed explanations available when needed. The ICO has issued guidance emphasising that privacy information should be “concise, transparent, intelligible and easily accessible,” using clear and plain language—particularly when addressing children.

## UK Data Protection Act 2018: post-Brexit amendments and domestic provisions

The UK Data Protection Act 2018 (DPA 2018) works in tandem with the UK GDPR (the retained EU regulation as amended post-Brexit) to form Britain’s data protection framework. The DPA 2018 supplements the GDPR by addressing areas where member states had discretion, establishing provisions for law enforcement processing, intelligence services, and specific sector requirements. Following Brexit, the UK has maintained substantial alignment with EU standards whilst introducing targeted modifications that reflect domestic priorities and reduce perceived regulatory burdens on businesses.

Recent legislative developments, including the proposed Data Protection and Digital Information Bill, signal the UK government’s intention to diverge further from EU approaches. These amendments aim to reduce compliance costs for businesses—particularly regarding cookie consent, international data transfers, and documentation requirements—whilst maintaining high privacy standards. However, organisations operating across UK and EU markets must carefully monitor these developments, as significant divergence could jeopardise the adequacy decision that currently enables seamless data flows between jurisdictions. The challenge for multinational organisations is maintaining compliance frameworks that satisfy both regimes without duplicating systems unnecessarily or creating operational inefficiencies.

## CCPA and global privacy regulations: comparative analysis of territorial scope

While the GDPR often dominates discussions, it is far from the only influential privacy regime. The California Consumer Privacy Act (CCPA), strengthened by the CPRA amendments, has introduced a distinctly American model of data protection focused on consumer rights, transparency, and opt-out mechanisms rather than comprehensive processing principles. Its territorial scope, like the GDPR, extends beyond state borders: any business that meets certain revenue or data-processing thresholds and “does business” in California must comply, regardless of where it is established. This has effectively turned California into a global privacy regulator for companies with US-facing operations.

Comparing the GDPR and CCPA helps illustrate how confidentiality obligations can shift depending on jurisdiction. The GDPR frames individuals as “data subjects” with fundamental rights, while the CCPA describes them as “consumers” with enforceable economic and informational rights, such as the right to know, delete, and opt out of “sales” or “sharing” of personal information. Other frameworks—such as Brazil’s LGPD, South Africa’s POPIA, and emerging Indian legislation—borrow elements from both models, creating a patchwork of global privacy regulations that organisations must map and reconcile. For multinational businesses, building a harmonised standard that meets the strictest applicable law is often more efficient than trying to run multiple conflicting confidentiality regimes in parallel.

## Information Commissioner’s Office (ICO) enforcement powers and penalty structures

In the UK, the Information Commissioner’s Office (ICO) is the supervisory authority responsible for enforcing the UK GDPR and DPA 2018. Its enforcement toolkit is broad: it can issue information notices, conduct audits, serve enforcement notices compelling remedial action, and impose substantial administrative fines. For the most serious infringements, the ICO can levy penalties of up to £17.5 million or 4% of the organisation’s global annual turnover—whichever is higher—mirroring the GDPR’s maximum thresholds. These headline figures are designed to ensure that confidentiality breaches are not written off as a minor cost of doing business.

In practice, the ICO takes a risk-based and proportionate approach, assessing factors such as the nature of the data, the number of individuals affected, the duration of the breach, and how the organisation responded once it became aware of the incident. Mitigating measures—like timely breach notification, clear cooperation with investigators, and demonstrable investment in technical and organisational safeguards—can significantly reduce the eventual fine. This means that your day-to-day governance decisions directly influence regulatory exposure: having robust policies on paper is not enough if they are not implemented and evidenced in practice.
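
The arithmetic behind the statutory maximum is simple enough to express directly. The Python sketch below computes the ceiling for the most serious infringements under the UK regime; the turnover figure in the example is an illustrative assumption.

```python
def uk_gdpr_max_fine(global_annual_turnover_gbp: float) -> float:
    """Ceiling for the most serious UK GDPR infringements:
    the higher of £17.5 million or 4% of global annual turnover."""
    return max(17_500_000.0, 0.04 * global_annual_turnover_gbp)

# Illustrative example: a group turning over £2 billion faces a ceiling of £80 million.
print(f"£{uk_gdpr_max_fine(2_000_000_000):,.0f}")  # £80,000,000
```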

## Legal obligations for data controllers and processors in digital ecosystems

Modern data processing rarely happens within a single organisation’s four walls. Cloud hosting providers, SaaS platforms, payment gateways, marketing agencies, and analytics vendors all form part of a complex digital ecosystem through which personal data flows. The GDPR recognises this by distinguishing between “controllers” (those who determine the purposes and means of processing) and “processors” (those who process data on behalf of controllers). Each role carries distinct legal obligations, and misunderstanding where you sit in this chain can expose you to unexpected confidentiality risks.

Controllers bear primary responsibility for ensuring that processing is lawful, fair, and transparent, while processors must implement appropriate security measures and act only on documented instructions. In practice, many organisations play both roles simultaneously—for instance, acting as a controller for HR data while serving as a processor for B2B client data. Taking time to map your processing roles across different services is a crucial first step in understanding which legal duties apply and where contractual safeguards are needed.

## Article 28 GDPR processor agreements: contractual safeguards and liability chains

Article 28 GDPR requires controllers to appoint processors only under written contracts that contain specific mandatory clauses. These “data processing agreements” (DPAs—not to be confused with the Data Protection Act) are the legal backbone of confidentiality in outsourced environments. At a minimum, they must set out the subject matter, duration, nature and purpose of the processing, the types of personal data and categories of data subjects, and the obligations and rights of the controller. They must also oblige the processor to implement appropriate security measures, ensure staff confidentiality, support the controller with data subject requests, and obtain prior authorisation before engaging sub-processors.

From a practical standpoint, you should treat processor contracts less like boilerplate and more like a shared risk-management tool. Do you know which sub-processors your cloud provider relies on, and where they are located? Have you agreed clear notification timelines for breaches or incidents? Are audit and inspection rights realistic—both technically and commercially—or merely theoretical? In a complex liability chain, a single weak contract can become the point of failure when something goes wrong. Investing time upfront to negotiate robust Article 28 terms, rather than blindly accepting standard templates, can prevent costly disputes later.

## Data protection impact assessments (DPIAs): when systematic evaluation becomes mandatory

DPIAs are structured risk assessments designed to evaluate the impact of proposed processing activities on individuals’ rights and freedoms. Under Article 35 GDPR, conducting a DPIA is mandatory where processing is “likely to result in a high risk” to individuals—for example, when using new technologies, engaging in systematic monitoring of public areas, or processing large-scale special category data. Supervisory authorities, including the ICO, have published lists of processing operations that typically require a DPIA, such as behavioural profiling, large-scale CCTV, and certain AI-driven decision-making tools.

Think of a DPIA as the privacy equivalent of a building’s structural survey: it forces you to examine how data will flow, where confidentiality weaknesses might arise, and which safeguards you must implement before going live. A good DPIA is not a one-off tick-box exercise but a living document that should be revisited when systems change, new datasets are added, or emerging technologies introduce fresh risks. Involving multidisciplinary stakeholders—IT, legal, security, product, and even frontline staff—ensures that you capture real operational realities rather than theoretical assumptions.
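
To make the screening step concrete, here is a minimal Python sketch of a DPIA triage check. The indicator list loosely mirrors Article 35(3) and ICO guidance, but the exact names and the any-match trigger are illustrative assumptions, not a substitute for legal analysis.

```python
# High-risk indicators loosely based on Article 35(3) GDPR and ICO guidance.
# The precise list and threshold are assumptions for illustration.
HIGH_RISK_INDICATORS = {
    "new_technology",                 # e.g. novel AI or biometric tools
    "systematic_monitoring",          # e.g. large-scale CCTV of public areas
    "large_scale_special_category_data",
    "automated_decision_with_legal_effect",
    "profiling_or_behavioural_tracking",
    "vulnerable_data_subjects",       # e.g. children, patients
}

def dpia_recommended(project_indicators: set[str]) -> bool:
    """Flag a project for a full DPIA when it matches any high-risk indicator."""
    return bool(project_indicators & HIGH_RISK_INDICATORS)

print(dpia_recommended({"new_technology", "profiling_or_behavioural_tracking"}))  # True
```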

## Records of processing activities (RoPA): documentation requirements under Article 30

Article 30 GDPR requires most organisations to maintain written Records of Processing Activities (RoPA), documenting what personal data they process, for what purposes, on what legal bases, where it is stored, and with whom it is shared. While smaller organisations with limited low-risk processing may benefit from narrow exemptions, in practice any business with digital operations will need some form of RoPA to demonstrate accountability. Regulators increasingly ask for these records during investigations to assess whether organisations understand their own processing landscape.

Creating a RoPA can feel like mapping an invisible city of data flows: you will discover shadow systems, legacy databases, and ad-hoc spreadsheets that never made it into earlier inventories. The exercise is worthwhile because it underpins almost every other confidentiality control—from setting retention periods to responding to DSARs. Many organisations now use dedicated governance, risk, and compliance (GRC) tools to maintain RoPA dynamically, integrating them with ticketing systems and asset inventories so that changes in infrastructure automatically trigger updates to processing records.
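
One way to keep a RoPA machine-readable is to model each entry as a structured record. The Python sketch below mirrors the Article 30(1) fields; the example activity and all field values are hypothetical.

```python
from dataclasses import dataclass, field

# A minimal sketch of one Article 30 RoPA entry as a data structure.
# Real GRC tools add versioning, owners, and links to systems and contracts.
@dataclass
class RopaEntry:
    processing_activity: str
    purposes: list[str]
    legal_basis: str                   # e.g. "legal obligation"
    data_categories: list[str]
    data_subject_categories: list[str]
    recipients: list[str]
    retention_period: str
    storage_locations: list[str]
    international_transfers: list[str] = field(default_factory=list)

payroll = RopaEntry(
    processing_activity="Payroll administration",
    purposes=["Pay staff", "HMRC reporting"],
    legal_basis="legal obligation",
    data_categories=["name", "bank details", "salary"],
    data_subject_categories=["employees"],
    recipients=["payroll bureau", "HMRC"],
    retention_period="6 years after employment ends",
    storage_locations=["UK data centre"],
)
```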

## Data protection officers (DPOs): appointment criteria and independence guarantees

The GDPR requires the appointment of a Data Protection Officer in specific circumstances: where processing is carried out by a public authority, where core activities involve large-scale regular and systematic monitoring, or where core activities consist of large-scale processing of special category or criminal conviction data. Even where not strictly mandatory, many organisations choose to appoint a DPO—or at least a privacy lead—to centralise oversight and advice on confidentiality issues. The DPO’s role is advisory and supervisory rather than operational; they must be involved “in a timely manner” in all matters relating to personal data.

Independence is crucial. A DPO cannot be penalised for performing their duties, and they must not be placed in a position where they mark their own homework—for instance, serving simultaneously as head of IT or marketing where conflicts of interest are obvious. In practice, this means giving the DPO direct access to senior management, adequate resources, and the freedom to raise concerns without fear of retaliation. Whether you appoint an internal DPO or engage an external specialist, their effectiveness depends on whether the wider organisation is willing to act on their recommendations rather than treating them as a compliance ornament.

## Confidentiality breaches and regulatory response mechanisms

Despite the best-laid policies and technical safeguards, confidentiality breaches remain a question of “when,” not “if.” Phishing attacks, misdirected emails, misconfigured cloud buckets, and compromised credentials are all common causes of personal data exposure. What differentiates resilient organisations is not an absence of incidents, but the speed and transparency with which they detect, contain, and learn from them. The GDPR codifies this expectation by imposing strict breach-notification duties and empowering regulators to scrutinise both the root cause and the remediation measures taken.

From a legal perspective, a personal data breach is broader than many assume. It doesn’t just cover exfiltration by malicious actors; any unauthorised access, alteration, loss, or destruction of personal data can qualify. Accidentally emailing a spreadsheet to the wrong recipient or leaving a laptop on a train may trigger the same obligations as a sophisticated cyber-attack. Building a robust incident response playbook—and rehearsing it—is therefore as much a compliance task as it is a security one.

## Article 33 breach notification: 72-hour reporting timelines to supervisory authorities

Article 33 GDPR requires controllers to notify the relevant supervisory authority of a personal data breach “without undue delay and, where feasible, not later than 72 hours” after becoming aware of it, unless the breach is unlikely to result in a risk to individuals’ rights and freedoms. This 72-hour window can feel brutally short when you are in the middle of an unfolding incident, especially if systems are still compromised or forensic evidence is incomplete. However, the law recognises this by allowing initial notifications to be made with limited information, followed by updates as more details emerge.

In practice, you should not wait until every technical question has been answered before notifying regulators. A concise early report that sets out what you know, what you are doing, and when you expect further updates demonstrates good faith and can significantly influence the regulator’s assessment of your response. Parallel to the authority notification, Article 34 may require you to communicate directly with affected individuals where the breach is likely to result in a high risk to their rights—something that can be reputationally sensitive but vital to enable people to protect themselves (for example by resetting passwords or monitoring accounts).
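
The clock itself is trivial to compute once you have fixed the moment of awareness, which is the genuinely hard judgement. A minimal Python sketch:

```python
from datetime import datetime, timedelta, timezone

# The Article 33 window runs from the moment the controller becomes "aware"
# of the breach. Determining awareness is a judgement call; this helper
# only does the arithmetic.
def notification_deadline(became_aware_at: datetime) -> datetime:
    return became_aware_at + timedelta(hours=72)

aware = datetime(2024, 6, 14, 9, 30, tzinfo=timezone.utc)
print(notification_deadline(aware).isoformat())  # 2024-06-17T09:30:00+00:00
```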

## British Airways £20M and Marriott £18.4M ICO fines: case study analysis

The ICO’s enforcement actions against British Airways and Marriott International are often cited as cautionary tales of what happens when confidentiality failures intersect with large-scale data processing. In the British Airways case, attackers diverted user traffic from the airline’s website to a fraudulent site, harvesting payment card and personal details from approximately 400,000 customers. The ICO’s investigation highlighted deficiencies in BA’s security controls, such as the absence of multi-factor authentication and inadequate network segmentation, which could have mitigated or prevented the intrusion.

Marriott’s case stemmed from a compromise of the Starwood Hotels reservation database—an incident that predated Marriott’s acquisition but nonetheless became its responsibility. The ICO focused heavily on due diligence and post-acquisition integration, concluding that Marriott had failed to adequately assess and secure the inherited systems. Together, these cases illustrate two critical lessons: regulators will look beyond the immediate breach vector to examine governance and technical posture over time, and corporate transactions do not erase confidentiality obligations. If you acquire or merge with another business, its legacy data practices become your problem.

## Forensic investigation protocols: establishing root cause and remediation measures

Effective breach response hinges on sound forensic investigation. This involves more than hiring an external incident response firm after the fact; it requires you to design systems and logging practices in advance so that meaningful evidence exists when something goes wrong. Detailed audit trails, centralised log management, and time-synchronised systems all help investigators reconstruct what happened, which accounts were compromised, and which data sets were accessed or exfiltrated.

From a confidentiality-law perspective, the purpose of forensics is twofold. First, it allows you to provide regulators and affected individuals with accurate information rather than speculation. Second, it enables you to identify root causes and implement corrective measures—whether that means tightening access controls, rolling out multi-factor authentication, enhancing staff training, or redesigning network architecture. Treating every incident as an opportunity to strengthen your overall posture, rather than simply closing the immediate hole, is one of the most effective ways to reduce recurring breaches over time.
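
As a concrete illustration of the logging discipline this requires, the sketch below emits structured, UTC-timestamped audit events using Python’s standard logging module. The event names and fields are assumptions; in production these entries would be shipped to a centralised, tamper-evident log store with synchronised clocks.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_logger = logging.getLogger("audit")

def audit(event: str, actor: str, resource: str, outcome: str) -> None:
    """Emit one structured audit event as a JSON line."""
    audit_logger.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": event,        # e.g. "record_read", "login_failed"
        "actor": actor,        # authenticated user or service account
        "resource": resource,  # e.g. "customer/10432"
        "outcome": outcome,    # "success" / "denied"
    }))

audit("record_read", "alice@example.com", "customer/10432", "success")
```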

## Technical and organisational measures for confidentiality assurance

Article 32 GDPR obliges controllers and processors to implement “appropriate technical and organisational measures” to ensure a level of security appropriate to the risk. This formulation is deliberately flexible: what is appropriate for a small local charity will differ from what is expected of a global bank. Yet certain building blocks recur across all mature confidentiality programmes—strong encryption, controlled access, secure development practices, and a culture of awareness among staff. The law does not prescribe specific technologies, but regulators increasingly benchmark organisations against industry norms.

A useful analogy is building a secure house. Locks on the doors (access controls), curtains on the windows (privacy by design in user interfaces), a sturdy safe for valuables (encryption), and an alarm system (monitoring and alerting) all play complementary roles. Neglecting any single layer makes the others less effective. Your challenge is to choose and integrate these measures in a way that makes sense for your risk profile, budget, and technical environment—then document those choices so you can demonstrate reasonableness to auditors and regulators.

## Encryption standards: AES-256, TLS 1.3, and end-to-end encryption protocols

Encryption is one of the most powerful tools for protecting confidentiality, both at rest and in transit. Industry standards such as AES-256 for data at rest and TLS 1.3 for data in transit are now widely recognised as baseline requirements rather than advanced options. Proper key management—often overlooked—is just as important as the algorithm itself; storing encryption keys on the same server as the encrypted data is akin to locking your front door and taping the key next to the handle.

End-to-end encryption (E2EE) goes a step further by ensuring that only the communicating endpoints can read the content, with no decryption possible in transit or on intermediate servers. Messaging apps, telehealth platforms, and collaboration tools increasingly adopt E2EE to bolster confidentiality, though this can raise complex law-enforcement and compliance debates. When assessing encryption strategies, you should align them with your threat model and regulatory context: for highly sensitive data—such as health records or financial transactions—robust, well-implemented encryption can dramatically reduce the impact of a breach and may influence whether notification to individuals is required.
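
For illustration, here is a minimal authenticated-encryption sketch using AES-256-GCM via the widely used Python `cryptography` package. Note that the key is generated and handled separately from the ciphertext; in production it would live in a KMS or HSM rather than in application memory or on disk beside the data.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key; store in a KMS/HSM
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # unique per message, never reused
plaintext = b"account=10432; balance=1204.55"
ciphertext = aesgcm.encrypt(nonce, plaintext, b"record-v1")  # authenticated data binds context

# Decryption fails loudly if the ciphertext or associated data is tampered with.
assert aesgcm.decrypt(nonce, ciphertext, b"record-v1") == plaintext
```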

## Pseudonymisation and anonymisation techniques under Article 32 GDPR

Article 32 GDPR explicitly cites pseudonymisation (alongside encryption) as a potential security measure, whereas anonymisation, properly achieved, takes data outside the regulation’s scope altogether; the distinction between the two is legally and technically significant. Pseudonymisation involves replacing direct identifiers (like names or ID numbers) with codes or tokens while retaining a separate key that can re-identify individuals if necessary. Because re-identification remains possible, pseudonymised data is still considered personal data and remains within the GDPR’s scope, though risks and obligations may be reduced.

Anonymisation, by contrast, refers to processing that irreversibly prevents identification of individuals, even when combined with other reasonably available data. True anonymisation is hard to achieve in practice, especially in a world of rich datasets and powerful analytics. Techniques such as aggregation, k-anonymity, differential privacy, and noise injection can help, but they must be carefully calibrated to avoid re-identification risks. When you hear claims that “data is anonymous,” it is worth asking: under what assumptions, with what external data, and for how long?
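
A common pseudonymisation pattern is keyed tokenisation, sketched below in Python using an HMAC. Because anyone holding the key can recompute the mapping from candidate identifiers, the output remains pseudonymised personal data rather than anonymous data; the key shown inline is purely illustrative and would belong in a separate key vault.

```python
import hashlib
import hmac

PSEUDONYM_KEY = b"keep-me-in-a-separate-key-vault"  # illustrative only

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed token."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# The same input always yields the same token, so records stay linkable
# for analytics while the raw identifier is kept out of the dataset.
print(pseudonymise("jane.doe@example.com"))
```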

## Access control mechanisms: role-based access control (RBAC) and Zero Trust architecture

Confidentiality is not just about stopping outsiders; it is also about ensuring that insiders only access the data they genuinely need. Role-Based Access Control (RBAC) addresses this by assigning permissions based on job roles rather than on an ad-hoc individual basis. Properly implemented, RBAC simplifies onboarding and offboarding, reduces the risk of privilege creep, and aligns neatly with the GDPR’s data-minimisation and integrity principles. However, RBAC must be kept up to date as roles evolve, or it can quickly become as messy as the permissions it was designed to replace.

Zero Trust architecture takes the idea further by assuming that no user, device, or network segment is inherently trustworthy—verification is required at every step. Rather than relying on perimeter defences alone, Zero Trust emphasises continuous authentication, micro-segmentation, and least-privilege access. For organisations with distributed workforces and cloud-heavy infrastructures, adopting Zero Trust principles can significantly strengthen confidentiality protections, though it often requires cultural change and careful technical planning.
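
Stripped to its essentials, RBAC is a two-level mapping: users to roles, and roles to permissions. The Python sketch below shows the idea; the role and permission names are invented for illustration.

```python
# Permissions hang off roles; users acquire permissions only via role membership.
ROLE_PERMISSIONS = {
    "support_agent": {"customer:read"},
    "billing":       {"customer:read", "invoice:read", "invoice:write"},
    "dpo":           {"customer:read", "audit_log:read"},
}

USER_ROLES = {
    "alice": {"support_agent"},
    "bob":   {"billing"},
}

def has_permission(user: str, permission: str) -> bool:
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(has_permission("alice", "invoice:write"))  # False (least privilege holds)
print(has_permission("bob", "invoice:write"))    # True
```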

## ISO 27001 certification: information security management systems for compliance

ISO/IEC 27001 provides a globally recognised framework for establishing, implementing, maintaining, and continually improving an Information Security Management System (ISMS). While certification is not mandatory under data protection law, it offers a structured way to align technical and organisational measures with recognised best practice. For many regulators and business partners, ISO 27001 certification serves as a strong indicator that you take confidentiality seriously and manage information risks systematically.

Implementing an ISMS under ISO 27001 forces you to perform risk assessments, define security objectives, assign responsibilities, and document policies and procedures. It also requires regular internal audits and management reviews, embedding security into the organisation’s governance cycles rather than treating it as a one-off project. For organisations seeking to demonstrate GDPR accountability, ISO 27001 can act as a powerful complement, providing both the operational discipline and the audit trail needed to evidence compliance.

## Individual rights and confidentiality safeguards in practice

Beyond organisational duties, modern data protection laws grant individuals an extensive suite of rights designed to give them meaningful control over their personal information. These rights are not abstract legal artefacts; they directly shape how you design systems, draft processes, and train staff. Failing to respond properly to a rights request can be just as damaging—legally and reputationally—as a security breach. In an era where users increasingly ask, “What are you doing with my data?” robust rights-handling processes have become a key differentiator of trustworthy organisations.

From a confidentiality standpoint, rights such as access, rectification, erasure, restriction, and objection all require you to know where data resides, who can see it, and how it can be safely altered or deleted. This is where earlier building blocks—RoPA, DPIAs, encryption, and access controls—converge into practical workflows. If your data landscape is opaque or fragmented, honouring these rights within legal timeframes becomes a significant challenge.

## Right to erasure (Article 17): implementing data deletion across distributed systems

The right to erasure—often called the “right to be forgotten”—allows individuals to request deletion of their personal data in specific circumstances, such as where it is no longer necessary for the original purpose, consent has been withdrawn, or processing is unlawful. Implementing this right in a distributed digital environment is far from trivial. Personal data may exist in production databases, analytics warehouses, backups, log files, and third-party systems, each with different technical constraints and retention policies.

To operationalise erasure, you should first define clear data retention schedules aligned with legal and business requirements, then embed deletion workflows into your systems. This may involve building APIs that propagate erasure requests to downstream processors, designing backup strategies that balance disaster recovery needs with timely overwriting of old data, and tagging records so that they can be selectively removed without corrupting system integrity. Where complete deletion is technically impossible or disproportionately difficult—such as in immutable backups—you may need to document the limitation, apply alternative safeguards (like logical deletion and strict access controls), and explain this transparently to the individual.
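
A minimal sketch of the fan-out pattern follows, in Python. The system names and deletion callables are hypothetical; the point is that each downstream outcome, including documented limitations such as immutable backups, is recorded for the accountability trail.

```python
from datetime import datetime, timezone

def erase_everywhere(subject_id: str, systems: dict) -> list[dict]:
    """Propagate an erasure request to every registered system and log outcomes."""
    results = []
    for name, delete_fn in systems.items():
        try:
            delete_fn(subject_id)
            status = "deleted"
        except NotImplementedError:
            # e.g. immutable backups: document the limitation and apply
            # compensating controls instead of hard deletion
            status = "logical_deletion_only"
        results.append({
            "system": name,
            "status": status,
            "at": datetime.now(timezone.utc).isoformat(),
        })
    return results

def delete_from_crm(subject_id): print(f"CRM: purged {subject_id}")
def delete_from_backups(subject_id): raise NotImplementedError

print(erase_everywhere("subject-10432",
                       {"crm": delete_from_crm, "backups": delete_from_backups}))
```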

## Data subject access requests (DSARs): one-month response obligations and exemptions

Data Subject Access Requests (DSARs) give individuals the right to obtain confirmation of whether you process their personal data and, if so, to receive a copy along with information about purposes, categories, recipients, retention periods, and their associated rights. Under the GDPR, you generally have one month to respond, with the possibility of extending by two further months for complex or numerous requests. Charging a fee is only permitted in limited circumstances, such as where requests are manifestly unfounded or excessive.

Efficient DSAR handling requires a combination of process, technology, and judgement. Do you have a central intake mechanism so requests are not lost in personal inboxes? Can you search across systems without manually trawling through every application? Are staff trained to recognise when exemptions apply—for example, to protect the rights of others, legal privilege, or trade secrets—without over-using these carve-outs to shield embarrassing but lawful practices? Getting DSARs right is not only a legal necessity; it is also a powerful way to build trust by showing that you are willing and able to shine a light on your own data practices.
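
Calendar-month deadlines are a common source of off-by-one errors, so it can help to encode the rule once. The Python sketch below follows the "same day next month, clamped to month end" convention from ICO guidance on calendar months; handling of weekends and holidays is deliberately omitted.

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Same day N months later, clamped to the last day of the target month."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

received = date(2024, 1, 31)
print(add_months(received, 1))      # 2024-02-29, the baseline one-month deadline
print(add_months(received, 1 + 2))  # 2024-04-30, with the maximum two-month extension
```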

## Consent management platforms: cookie consent and legitimate interest balancing

Consent remains a central legal basis for many online activities, particularly in the context of cookies, tracking technologies, and direct marketing. At the same time, regulators across Europe have cracked down on “dark patterns” and non-compliant cookie banners that nudge users into acceptance. Modern consent management platforms (CMPs) aim to solve this by providing configurable interfaces that allow users to granularly accept or reject different categories of cookies, log their preferences, and propagate those choices to underlying scripts and tags.

However, consent is not the only game in town. For some processing activities—like basic web analytics or certain security logs—organisations may rely on legitimate interests instead, provided they conduct and document a careful balancing test between their own interests and individuals’ rights. The art lies in choosing the right basis for each activity and communicating it clearly. Asking for consent where you would rely on legitimate interest can backfire if users say “no” and you later decide to process anyway. Conversely, stretching legitimate interest to cover invasive tracking is likely to attract regulatory scrutiny. A well-designed CMP, combined with thoughtful legitimate interest assessments, helps you navigate this terrain honestly and transparently.
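
Whatever CMP you use, the durable artefact is the consent record itself. The Python sketch below shows the minimum shape such a record might take; the category names and policy-version scheme are assumptions, and real platforms add device, locale, and proof-of-display metadata.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    choices: dict          # category -> bool, e.g. {"analytics": False}
    policy_version: str    # which notice the user actually saw
    recorded_at: str       # timestamped so consent can be demonstrated later

def record_consent(user_id: str, choices: dict, policy_version: str) -> ConsentRecord:
    return ConsentRecord(user_id, dict(choices), policy_version,
                         datetime.now(timezone.utc).isoformat())

rec = record_consent("u-9812",
                     {"strictly_necessary": True, "analytics": False,
                      "advertising": False},
                     policy_version="2024-06")
print(rec)
```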

## Emerging technologies and confidentiality challenges

As digital ecosystems evolve, so too do the confidentiality questions they raise. Artificial intelligence, hyperscale cloud computing, and decentralised technologies like blockchain all push against the boundaries of existing legal frameworks. Legislators and regulators are racing to keep up, but many organisations are already deploying these tools in production. The result is a moving target: you must interpret traditional principles—such as data minimisation, purpose limitation, and accountability—in contexts that their original drafters never envisaged.

Rather than treating emerging technologies as an excuse to park confidentiality concerns until the law “catches up,” it is safer—and strategically wiser—to apply existing principles proactively. Ask yourself: if a regulator reviewed this AI model, cloud migration, or blockchain deployment in three years’ time, would we be comfortable explaining our design choices and risk assessments? If not, now is the time to rethink them.

## Artificial Intelligence Act compliance: data minimisation in machine learning training sets

Machine learning thrives on data volume and variety, but data protection law insists on minimisation and purpose limitation. The forthcoming EU Artificial Intelligence Act (AI Act) will add another layer of obligations, especially for “high-risk” AI systems used in areas such as credit scoring, recruitment, or access to essential services. These systems will need robust data governance, including documented data quality checks, bias mitigation measures, and traceability of training data sources.

For confidentiality, this means you cannot simply hoover up every available dataset “just in case” it improves model accuracy. You should define clear training objectives, select only the data necessary to achieve them, and consider techniques like federated learning or synthetic data generation to reduce reliance on identifiable information. Regularly reviewing models for drift and unintended inferences—such as reconstructing sensitive attributes from ostensibly non-sensitive inputs—helps ensure that AI does not quietly erode privacy over time. In many respects, responsible AI is simply good data protection with new tools.
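
In practice, minimisation often starts with an allow-list of features tied to the documented training objective. A small pandas sketch, with illustrative column names, where the allow-list would come from the DPIA or model documentation:

```python
import pandas as pd

ALLOWED_FEATURES = ["tenure_months", "monthly_spend", "support_tickets"]
TARGET = "churned"

def minimise_training_set(df: pd.DataFrame) -> pd.DataFrame:
    # Drop everything not on the allow-list, including names, emails, and IDs.
    return df[ALLOWED_FEATURES + [TARGET]].copy()

raw = pd.DataFrame({
    "name": ["Jane Doe"], "email": ["jane@example.com"],
    "tenure_months": [18], "monthly_spend": [42.5],
    "support_tickets": [3], "churned": [0],
})
print(minimise_training_set(raw).columns.tolist())
# ['tenure_months', 'monthly_spend', 'support_tickets', 'churned']
```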

## Cloud service providers: AWS, Azure, and Google Cloud data residency guarantees

Cloud computing has become the default infrastructure for many organisations, raising important questions about data residency, cross-border transfers, and shared responsibility. Major providers such as AWS, Microsoft Azure, and Google Cloud now offer region-specific data centres, residency guarantees for certain services, and detailed contractual addenda addressing GDPR requirements. Yet the fact that data is stored in a particular region does not, by itself, resolve all confidentiality concerns—especially where foreign governments may claim access under their own laws.

To manage these risks, you should start with a clear cloud strategy: what data will you host where, under which legal bases, and with what encryption and key-management arrangements? Reviewing your provider’s Data Processing Addendum, Standard Contractual Clauses, and technical documentation is essential, but so is understanding your own configuration responsibilities. Misconfigured storage buckets and excessive admin privileges remain leading causes of cloud-related breaches. Ultimately, the law views cloud as an extension of your processing environment, not a dumping ground that magically transfers accountability to someone else.
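
Residency promises are only as good as your ability to verify them. As one hedged example, the Python sketch below uses boto3’s get_bucket_location call to flag S3 buckets outside an approved-region allow-list; the approved regions are assumptions, and Azure and Google Cloud expose equivalent location metadata through their own SDKs.

```python
import boto3  # pip install boto3; AWS credentials must be configured

APPROVED_REGIONS = {"eu-west-2", "eu-west-1"}  # e.g. London and Ireland

def non_compliant_buckets() -> list[str]:
    """Return buckets whose region falls outside the approved allow-list."""
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        region = s3.get_bucket_location(Bucket=bucket["Name"])["LocationConstraint"]
        region = region or "us-east-1"  # the API returns None for us-east-1
        if region not in APPROVED_REGIONS:
            flagged.append(f"{bucket['Name']} ({region})")
    return flagged

print(non_compliant_buckets())
```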

## Blockchain technology and GDPR: reconciling immutability with the right to erasure

Blockchain and distributed ledger technologies (DLTs) challenge traditional notions of data control by design. Their immutability—often touted as a core feature—sits uneasily alongside rights like rectification and erasure. If personal data is written directly onto a public blockchain, how can it ever be deleted or corrected? Who is the “controller” in a decentralised network with no central authority? These are not purely theoretical questions; regulators have begun scrutinising blockchain-based identity systems, NFT platforms, and supply chain solutions through a GDPR lens.

Technical and legal workarounds are emerging. One approach is to store only hashed pointers or encrypted references to data on-chain, keeping the underlying personal data off-chain where it can be modified or deleted when necessary. Another is to use permissioned blockchains with governance structures that clearly allocate controller responsibilities and allow for controlled amendments in exceptional circumstances. When exploring blockchain solutions, you should resist the temptation to record more data on-chain than is strictly necessary and involve privacy experts early in the design phase. In a data-driven society, the promise of transparency and traceability must be carefully balanced against the enduring need for confidentiality and individual control.
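
The off-chain pattern can be sketched in a few lines of Python. Only a salted hash commitment is appended to the (stand-in) ledger; destroying the off-chain record and its salt leaves the on-chain fingerprint practically unlinkable, which is how such designs approximate erasure despite immutability. All names here are illustrative.

```python
import hashlib
import os

off_chain_store = {}   # mutable storage under the controller's control
ledger = []            # stand-in for the append-only chain

def register(record_id: str, personal_data: bytes, salt: bytes) -> None:
    """Keep the data off-chain; commit only a salted hash to the ledger."""
    off_chain_store[record_id] = (personal_data, salt)
    commitment = hashlib.sha256(salt + personal_data).hexdigest()
    ledger.append({"record_id": record_id, "commitment": commitment})

def erase(record_id: str) -> None:
    # The ledger is never rewritten; destroying the data and its salt is what
    # renders the remaining on-chain commitment practically meaningless.
    off_chain_store.pop(record_id, None)

register("r1", b"Jane Doe, 1 High Street", os.urandom(16))
erase("r1")
print(ledger[0]["commitment"][:16], "r1" in off_chain_store)  # hash remains, data gone
```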