# Why Data Protection for Companies Is Critical in a Connected World
In an era where digital transformation accelerates at an unprecedented pace, the protection of corporate data has evolved from a compliance checkbox into a fundamental business imperative. Organisations across all sectors now operate in an environment where personal and commercial information flows seamlessly across borders, cloud infrastructures, and interconnected systems. This hyper-connected landscape presents extraordinary opportunities for innovation, efficiency, and growth, yet it simultaneously exposes businesses to an expanding array of vulnerabilities. The stakes have never been higher: a single data breach can trigger regulatory penalties reaching tens or even hundreds of millions of pounds, irreparably damage customer trust built over decades, and derail strategic initiatives. For decision-makers navigating this complex terrain, understanding the multifaceted dimensions of data protection isn’t merely about avoiding penalties—it’s about securing competitive advantage, maintaining operational resilience, and demonstrating the ethical stewardship that stakeholders increasingly demand.
## GDPR compliance requirements and cross-border data transfer mechanisms
The General Data Protection Regulation fundamentally reshaped how organisations approach personal data processing, establishing principles that now influence privacy frameworks worldwide. At its core, the GDPR mandates that businesses demonstrate accountability through documented processes, transparent communications, and robust technical safeguards. Organisations must identify their lawful basis for processing—whether consent, contractual necessity, legitimate interests, or another specified ground—and apply this consistently across all data activities. The regulation’s territorial scope extends beyond the European Economic Area, capturing any entity that offers goods or services to individuals in the EU or monitors their behaviour, regardless of where that entity is established.
When transferring personal data outside the EEA, companies face particularly stringent requirements designed to ensure that protective standards travel with the data. The European Commission has recognised certain countries as providing adequate protection, enabling transfers without additional safeguards. However, for transfers to destinations not covered by an adequacy decision—as was the case for the United States between the Schrems II ruling’s invalidation of Privacy Shield and the adoption of the EU-US Data Privacy Framework—organisations must implement alternative mechanisms. These requirements create significant operational complexity for multinational enterprises managing global data flows, necessitating careful legal analysis and technical implementation to maintain both compliance and business functionality.
### Standard contractual clauses (SCCs) under the Schrems II ruling
Standard Contractual Clauses represent pre-approved contractual terms between data exporters and importers that establish binding obligations to protect personal data. Following the Schrems II judgment in July 2020, which invalidated the EU-US Privacy Shield framework, SCCs gained renewed prominence as a primary transfer mechanism. However, the Court of Justice simultaneously clarified that SCCs alone may not suffice—organisations must conduct transfer impact assessments evaluating whether the destination country’s laws could enable public authorities to access data in ways incompatible with EU standards. This assessment obligation requires detailed analysis of the importing country’s surveillance laws, data access procedures, and available legal remedies.
The European Commission updated the SCCs in June 2021 to address these concerns, introducing modular clauses covering different transfer scenarios and enhanced requirements for supplementary measures. Organisations must now document their assessment process, implement additional technical safeguards such as encryption or pseudonymisation where necessary, and maintain ongoing monitoring of legal developments in destination countries. For businesses operating complex international supply chains, this means reviewing potentially hundreds of data transfer arrangements and implementing layered protection strategies that combine contractual, organisational, and technical measures to bridge any protection gaps identified through the assessment process.
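To make the documentation duty concrete, here is a minimal sketch of how a transfer record might be captured in code; the `TransferImpactAssessment` fields and the screening logic are hypothetical simplifications for illustration, not a legal test or an official template.

```python
from dataclasses import dataclass, field

# Hypothetical record structure for a transfer impact assessment (TIA);
# the field names are illustrative, not drawn from any official template.
@dataclass
class TransferImpactAssessment:
    importer: str                     # receiving entity outside the EEA
    destination_country: str
    data_categories: list[str]        # e.g. ["HR records", "customer emails"]
    adequacy_decision: bool           # does an EU adequacy decision apply?
    scc_module: str                   # e.g. "Module 2: controller-to-processor"
    surveillance_risk: str            # "low" / "high" summary of local laws
    supplementary_measures: list[str] = field(default_factory=list)

    def transfer_permitted(self) -> bool:
        """Crude screening only, not legal advice: adequacy suffices on its
        own; otherwise SCCs are needed, plus supplementary measures when
        the destination's surveillance risk is assessed as high."""
        if self.adequacy_decision:
            return True
        if not self.scc_module:
            return False
        return self.surveillance_risk != "high" or bool(self.supplementary_measures)
```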
### Data processing agreements (DPAs) with third-party vendors
Every relationship where an external entity processes personal data on behalf of a controller requires a comprehensive Data Processing Agreement establishing the processor’s obligations, limitations, and liabilities. Article 28 of the GDPR specifies mandatory DPA provisions, including processing scope limitations, confidentiality commitments, security requirements, sub-processor engagement procedures, and assistance obligations for handling data subject rights requests. These agreements must address technical and organisational measures proportionate to the processing risks, with specificity sufficient to demonstrate regulatory compliance during audits or investigations.
The practical challenge lies in negotiating DPAs with vendors ranging from major cloud infrastructure providers to specialised analytics firms, each presenting different risk profiles and negotiating leverage. Large technology companies typically offer standardised DPAs reflecting their global operations, leaving limited room for customisation. In contrast, smaller vendors may require substantial education about GDPR requirements and capacity building to meet contractual obligations. Organisations should establish vendor assessment frameworks evaluating security certifications, breach notification procedures, data location practices, and subprocessor chains before entering into contractual commitments. Beyond signing the DPA, companies should periodically review vendors’ security posture, request evidence of compliance (such as ISO 27001 certification or SOC 2 reports), and define clear escalation paths for incident reporting. By treating vendor management as an extension of their own data governance framework, organisations can reduce third-party risk while maintaining the agility that outsourced services provide.
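As a rough illustration of such a framework, the sketch below scores a vendor against the criteria just listed; the criteria names, weights, and approval threshold are invented assumptions, and any real programme would calibrate them to its own risk appetite.

```python
# Illustrative vendor risk scoring; the criteria mirror the text above,
# but the weights and the approval threshold are arbitrary assumptions.
CRITERIA_WEIGHTS = {
    "iso_27001_certified": 3,
    "soc2_report_available": 3,
    "breach_notification_within_24h": 2,
    "data_kept_in_eea": 2,
    "subprocessor_list_published": 1,
}

def vendor_risk_score(answers: dict[str, bool]) -> int:
    """Sum the weights of all criteria the vendor satisfies."""
    return sum(w for c, w in CRITERIA_WEIGHTS.items() if answers.get(c))

def approve_vendor(answers: dict[str, bool], threshold: int = 8) -> bool:
    return vendor_risk_score(answers) >= threshold

print(approve_vendor({
    "iso_27001_certified": True,
    "soc2_report_available": True,
    "breach_notification_within_24h": True,
    "data_kept_in_eea": False,
    "subprocessor_list_published": True,
}))  # True: a score of 9 meets the example threshold of 8
```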
### Binding corporate rules (BCRs) for multinational enterprises
Binding Corporate Rules offer a powerful mechanism for large multinational groups that routinely transfer personal data between entities located inside and outside the EEA. Unlike SCCs, which operate on a contractual basis between specific exporters and importers, BCRs function as an internal code of conduct approved by EU data protection authorities and binding across the corporate group. Once approved, they provide a scalable, long-term solution for intra-group data transfers, reducing the administrative burden of managing hundreds of bilateral agreements.
Implementing BCRs, however, is neither quick nor trivial. Organisations must demonstrate that they have robust privacy governance structures, comprehensive training programmes, effective complaint-handling procedures, and independent audit mechanisms. They also need to show that data subjects can enforce their rights and obtain redress, even when processing occurs in jurisdictions with weaker privacy regimes. For global enterprises that rely heavily on shared service centres, centralised analytics, or global HR platforms, the upfront investment in BCRs can pay dividends in operational efficiency and regulatory certainty over time.
### EU-US Data Privacy Framework implementation strategies
The EU-US Data Privacy Framework (DPF) emerged as the latest attempt to facilitate lawful transatlantic data flows following the invalidation of Safe Harbor and Privacy Shield, receiving its own European Commission adequacy decision in July 2023. Under the DPF, US organisations can self-certify to a set of privacy principles enforced by the US Federal Trade Commission, thereby enabling EU-based controllers to transfer personal data to them without relying on SCCs. For European companies, using a DPF-certified partner can significantly simplify compliance, particularly for standard SaaS and cloud services that underpin daily operations.
However, Schrems II made clear that adequacy-like mechanisms are not beyond challenge, and prudent organisations should adopt a layered approach rather than depending solely on the DPF. This means verifying that partners remain current on the DPF list, mapping what types of data are transferred, and considering supplementary technical measures such as strong encryption, data minimisation, and strict access controls. By combining DPF participation with internal risk assessments and privacy-by-design principles, companies can maintain business agility while preparing for possible future legal shifts in cross-border transfer rules.
## Cybersecurity threat landscape targeting corporate infrastructure
While regulatory frameworks define what “good” looks like on paper, the reality on the ground is shaped by an increasingly hostile cybersecurity threat landscape. Attackers today range from financially motivated cybercriminals to highly resourced nation-state actors, all exploiting weaknesses in corporate infrastructure, human behaviour, and third-party ecosystems. For many organisations, the question is no longer whether they will be targeted, but when—and how well they will withstand the attack. Understanding the main attack vectors is the first step towards building a resilient data protection strategy.
### Ransomware attack vectors: WannaCry and NotPetya case studies
Ransomware remains one of the most disruptive cyber threats to corporate operations. The WannaCry outbreak in May 2017 exploited a flaw in Microsoft’s SMBv1 protocol using the leaked EternalBlue exploit to spread rapidly across networks worldwide, crippling hospitals, manufacturers, and public bodies. Many victims had failed to apply available security patches or segment their networks, allowing the malware to propagate unhindered and encrypt critical data. The incident illustrated how a single unpatched system can act as a gateway to enterprise-wide compromise.
NotPetya, which followed shortly thereafter, was even more devastating. Disguised as ransomware but widely considered a destructive wiper attack linked to a nation-state actor, NotPetya leveraged compromised software updates to infiltrate organisations, particularly in Ukraine, before spreading globally. Companies such as Maersk and Merck suffered hundreds of millions of dollars in damages and weeks of operational disruption. These case studies highlight that defending against ransomware is not just about backups; it requires patch management discipline, network segmentation, application whitelisting, and rigorous incident response planning.
### Advanced persistent threats (APTs) and nation-state actors
Advanced Persistent Threats differ from opportunistic cybercrime in that they are targeted, stealthy, and frequently backed by state resources. APT groups often spend months or even years inside a victim’s environment, conducting reconnaissance, escalating privileges, and exfiltrating sensitive intellectual property or strategic data. Common targets include defence contractors, critical infrastructure providers, financial institutions, and technology firms whose assets have geopolitical or economic value.
Because APTs blend into normal network traffic and leverage legitimate credentials, traditional perimeter defences are often insufficient. Organisations need continuous monitoring, behavioural analytics, and threat intelligence to identify subtle indicators of compromise. Equally important is a mature identity and access management strategy—limiting lateral movement through least-privilege access, segmenting high-value systems, and rapidly revoking compromised accounts. For boards and executives, APT risk is not a purely technical concern; it is a strategic issue that can affect national security relationships, supply chains, and long-term competitiveness.
### Phishing and social engineering exploitation techniques
Despite advances in technical controls, many breaches still begin with a simple phishing email. Attackers craft messages that mimic trusted brands, colleagues, or suppliers, enticing recipients to click malicious links, open infected attachments, or divulge credentials. More sophisticated campaigns, such as business email compromise (BEC), involve careful research into corporate hierarchies and payment processes, enabling criminals to redirect funds or obtain sensitive data with a single convincing request.
Social engineering is effective because it targets human psychology rather than system vulnerabilities. People are naturally inclined to trust apparent authority figures, respond to urgency, and help colleagues. Countering these tactics requires more than an annual training slide deck. Organisations should deploy simulated phishing campaigns, just-in-time awareness prompts, and clear reporting channels that encourage employees to question suspicious communications. When staff feel empowered to slow down and verify requests—especially those involving payments, password resets, or data exports—the entire organisation becomes a more resilient barrier against data compromise.
### Zero-day vulnerabilities in enterprise software systems
Zero-day vulnerabilities—flaws unknown to the software vendor at the time of exploitation—pose a particularly insidious risk to corporate data protection. Because no patch exists when attackers first weaponise the bug, organisations may be exposed even if they follow best-practice patching cadences. High-profile examples, such as the Log4Shell vulnerability in the widely used Log4j logging library, revealed how deeply embedded components can be across enterprise applications, making detection and remediation a complex undertaking.
Mitigating zero-day risk requires a layered defence strategy. Network intrusion detection systems, web application firewalls, and endpoint detection and response (EDR) tools can sometimes identify anomalous behaviour associated with exploitation attempts, even before a formal fix is released. Asset inventories and software bills of materials (SBOMs) help security teams quickly identify where vulnerable components reside once advisories emerge. In essence, zero-day management is about preparedness: knowing your environment, monitoring it intelligently, and having the processes in place to respond rapidly when the inevitable disclosure occurs.
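To see why an SBOM pays off, consider the moment an advisory such as Log4Shell (CVE-2021-44228) is published. The sketch below assumes a simplified flat inventory rather than a full CycloneDX or SPDX document, and flags every application shipping a vulnerable component version:

```python
# Simplified SBOM matching; a real SBOM would be CycloneDX or SPDX JSON,
# and real version comparison needs a scheme-aware library, not int tuples.
sbom = [
    {"app": "billing-api", "component": "log4j-core", "version": "2.14.1"},
    {"app": "hr-portal",   "component": "log4j-core", "version": "2.17.1"},
    {"app": "web-shop",    "component": "jackson-databind", "version": "2.13.0"},
]

def affected(sbom, component, fixed_version):
    """Return inventory entries whose component predates the fixed version."""
    fixed = tuple(map(int, fixed_version.split(".")))
    return [e for e in sbom
            if e["component"] == component
            and tuple(map(int, e["version"].split("."))) < fixed]

# Log4Shell and its follow-up CVEs were addressed by log4j-core 2.17.1
for entry in affected(sbom, "log4j-core", "2.17.1"):
    print(f"PATCH NEEDED: {entry['app']} ships {entry['component']} {entry['version']}")
```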
## Data breach financial consequences and litigation exposure
When a data breach occurs, the impact extends far beyond the immediate technical clean-up. Regulators, customers, shareholders, and plaintiffs’ lawyers all scrutinise how the organisation prepared for, detected, and responded to the incident. The resulting financial consequences can include regulatory fines, compensation claims, operational downtime, remediation costs, and long-term reputational damage. Understanding these dimensions helps leaders justify investment in preventative controls and robust incident response plans.
### GDPR fine structures: Amazon Luxembourg and British Airways penalties
The GDPR introduced a tiered fine structure that can reach up to 20 million euros or 4% of a company’s worldwide annual turnover, whichever is higher, for the most serious infringements. Supervisory authorities have demonstrated a willingness to use these powers in practice. In 2021, Amazon Europe Core S.à r.l. (based in Luxembourg) reportedly faced a record GDPR fine of 746 million euros related to alleged breaches of transparency and consent obligations in targeted advertising. Although the company has contested the decision, the headline figure underscored the scale of potential exposure for large data-driven businesses.
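The arithmetic behind that ceiling is simple: the fine cap is the greater of the fixed amount and the turnover-based percentage, as a quick sketch shows:

```python
def gdpr_max_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound for the most serious GDPR infringements (Art. 83(5)):
    the higher of EUR 20 million or 4% of worldwide annual turnover."""
    return max(20_000_000, 0.04 * worldwide_annual_turnover_eur)

# For a group with EUR 50 billion turnover, the ceiling is EUR 2 billion:
print(gdpr_max_fine(50_000_000_000))  # 2000000000.0
```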
British Airways provides another instructive example. Following a 2018 breach that compromised the personal and payment data of around 400,000 customers, the UK Information Commissioner’s Office initially proposed a fine of £183 million before ultimately issuing a reduced penalty of £20 million. The ICO cited BA’s failure to implement basic security measures, such as multi-factor authentication and sufficiently robust monitoring, as aggravating factors. For other organisations, these cases highlight the importance of being able to demonstrate “appropriate technical and organisational measures” tailored to the risk profile of their processing activities.
### Class action lawsuits following the Equifax and Marriott incidents
Regulatory fines are only one part of the financial equation. In many jurisdictions, large-scale breaches also trigger class action litigation brought on behalf of affected individuals. The Equifax breach in 2017, which exposed sensitive credit data for approximately 147 million people, ultimately led to a settlement package in the United States valued at up to $700 million, including compensation for consumers, civil penalties, and credit monitoring services. The company also incurred substantial internal remediation and technology upgrade costs.
Similarly, Marriott faced class actions and regulatory scrutiny after attackers compromised its Starwood guest reservation database, exposing records relating to hundreds of millions of guests. Lawsuits alleged that the company failed to conduct adequate due diligence during its acquisition of Starwood and did not remediate known vulnerabilities in a timely manner. These examples illustrate that data protection is not only a security and compliance issue; it also intersects with corporate transactions, due diligence processes, and long-term legal liability planning.
### Cyber insurance premium calculations and coverage limitations
Many organisations look to cyber insurance as a way to mitigate financial risk arising from data breaches and cyber incidents. While cyber policies can indeed cover incident response costs, business interruption losses, and certain liabilities, they are not a substitute for strong cybersecurity controls. Insurers increasingly scrutinise applicants’ security postures—examining factors such as MFA deployment, patching practices, backup strategies, and employee training—when calculating premiums and deductibles. Poor controls can result in higher costs or even outright refusal of coverage.
Moreover, policy limitations and exclusions can surprise unprepared organisations. Some policies exclude coverage for nation-state attacks, regulatory fines, or certain types of contractual liabilities. Others may cap compensation for business interruption or impose strict notification and cooperation obligations during an incident. Treating cyber insurance as one component of a broader risk management framework, rather than a silver bullet, ensures that organisations do not discover critical gaps at the very moment they most need support.
### Shareholder value erosion after breach disclosure
Beyond immediate costs, data breaches can materially erode shareholder value. Studies have shown that, on average, publicly traded companies suffer a noticeable drop in share price following the announcement of a major incident, with some underperforming their sector indices for months or years afterwards. Investors increasingly view weak data protection as a governance failure, raising questions about board oversight, risk management maturity, and long-term strategic resilience.
In this context, cyber and data protection have become core environmental, social, and governance (ESG) issues. Institutional investors and proxy advisors may scrutinise disclosure around cybersecurity practices, demand enhanced reporting, or even vote against directors perceived as inattentive to digital risk. For executives, treating data protection as a board-level priority is therefore not only prudent from a security perspective; it is also essential to safeguarding enterprise value and access to capital.
## Enterprise data loss prevention (DLP) technologies and protocols
Given the scale of modern data flows—across email, endpoints, cloud services, and third-party integrations—manual controls alone cannot prevent unauthorised disclosure. Enterprise Data Loss Prevention technologies help organisations monitor, detect, and block sensitive information as it moves through and beyond corporate boundaries. When aligned with clear data classification policies and user education, DLP can significantly reduce the risk of accidental leakage and deliberate exfiltration alike.
### Network DLP solutions: Forcepoint and Symantec implementations
Network DLP tools, such as those offered by Forcepoint and Broadcom (Symantec), sit at key chokepoints in the organisation’s infrastructure—email gateways, web proxies, or network egress points—to inspect traffic for sensitive data patterns. These solutions can identify credit card numbers, national identifiers, health information, or custom-defined data elements using content inspection and contextual analysis. Depending on policy, they may automatically block, quarantine, or flag suspicious transmissions for review.
Implementing network DLP effectively requires more than simply turning on predefined rules. Organisations must first understand where their most sensitive data resides, how it legitimately moves, and which channels pose the greatest risk. Policies should be tuned to minimise false positives that frustrate users while still catching genuine violations. Over time, analytics derived from DLP alerts can inform broader process improvements—for example, redesigning workflows that currently rely on emailing spreadsheets of personal data when more secure alternatives exist.
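To make “content inspection” concrete, the toy detector below pairs a card-number regex with the Luhn checksum, a classic way to suppress false positives; it is a minimal sketch of the general technique, not how Forcepoint’s or Symantec’s engines are actually built:

```python
import re

# Candidate runs of 13-16 digits, optionally separated by spaces or hyphens.
CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum: double every second digit from the right."""
    digits = [int(d) for d in reversed(number)]
    total = sum(digits[0::2]) + sum(d * 2 - 9 if d * 2 > 9 else d * 2
                                    for d in digits[1::2])
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    hits = []
    for match in CARD_CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits

print(find_card_numbers("Order ref 1234567890123, card 4111 1111 1111 1111"))
# ['4111111111111111'] -- the order reference fails the Luhn check
```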
### Endpoint protection platforms with data classification capabilities
As remote and hybrid work models proliferate, endpoints such as laptops and mobile devices have become critical control points for data protection. Modern endpoint protection platforms increasingly incorporate DLP and data classification capabilities that operate directly on the device. These tools can prompt users to label documents according to sensitivity, automatically apply encryption, or prevent copying data to removable media or unauthorised cloud services.
Embedding data protection at the endpoint encourages responsible behaviour at the moment data is created or modified. It also recognises that many breaches arise not from malicious insiders, but from lost devices, misdirected files, or hurried workarounds. By guiding employees with contextual prompts—rather than simply blocking actions without explanation—organisations can reinforce a culture of privacy while maintaining productivity.
### Cloud access security brokers (CASBs) for SaaS application monitoring
With the shift to Software-as-a-Service, sensitive corporate data increasingly resides in platforms that sit outside the traditional network perimeter. Cloud Access Security Brokers act as intermediaries between users and cloud services, providing visibility, control, and threat protection for data stored in tools like Microsoft 365, Google Workspace, Salesforce, and countless niche SaaS applications. CASBs can discover unsanctioned “shadow IT” usage, enforce granular access policies, and apply DLP rules to content stored or shared in the cloud.
For organisations embracing cloud-first strategies, CASBs are a cornerstone of modern data protection architecture. They help answer critical questions: Which SaaS tools are in use? Who is accessing what data, from where, and on which devices? Are employees sharing confidential documents externally or synchronising them to personal storage? By integrating CASB insights with SIEM, IAM, and incident response workflows, companies can maintain control over cloud data without undermining the agility that SaaS solutions deliver.
### Database activity monitoring (DAM) and encryption at rest
Structured databases often hold the most sensitive corporate information, from customer records and payment details to trade secrets and financial data. Database Activity Monitoring solutions observe queries and transactions in real time, flagging anomalous behaviour such as mass exports, unusual access times, or queries issued by privileged accounts outside their normal patterns. DAM can alert security teams to potential insider threats or compromised accounts before large-scale exfiltration occurs.
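A drastically simplified version of that anomaly logic compares an account’s activity against its own historical baseline; the three-sigma threshold and the sample data below are illustrative assumptions:

```python
from statistics import mean, stdev

# Rows returned per day by one database account over the past two weeks
baseline = [120, 95, 130, 110, 105, 98, 140, 125, 115, 102, 99, 133, 121, 108]

def is_anomalous(rows_today: int, history: list[int], sigmas: float = 3.0) -> bool:
    """Flag query volumes more than `sigmas` standard deviations above the
    account's own mean -- a crude stand-in for real DAM analytics."""
    mu, sd = mean(history), stdev(history)
    return rows_today > mu + sigmas * sd

print(is_anomalous(150, baseline))     # False: busy but plausible day
print(is_anomalous(50_000, baseline))  # True: likely mass export
```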
Encryption at rest provides an additional safeguard, ensuring that if storage media or backups are lost or stolen, the underlying data remains unreadable without the appropriate keys. Effective encryption programmes must be paired with strong key management practices—segregating duties, rotating keys, and restricting access to key material. Together, DAM and encryption at rest form a robust last line of defence for critical data stores, limiting the damage even when other controls fail.
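As a minimal illustration of encryption at rest, the sketch below uses the Fernet recipe from Python’s widely used cryptography package; in a real deployment the key would live in an HSM or a managed key service, never alongside the ciphertext:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In production the key lives in a KMS/HSM, never next to the ciphertext.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"customer=Jane Doe;iban=DE89370400440532013000"
ciphertext = fernet.encrypt(record)  # what lands on disk or in backups

# Without `key`, the stored bytes are unreadable; with it, recovery is exact.
assert fernet.decrypt(ciphertext) == record
```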
## Identity and access management (IAM) architecture for data governance
Because most attacks ultimately aim to abuse credentials or access rights, identity has become the new security perimeter. A mature Identity and Access Management architecture ensures that only the right individuals—and systems—can access the right resources, for the right reasons, at the right time. From a data governance perspective, IAM operationalises principles such as least privilege, segregation of duties, and accountability, which are central to both security frameworks and data protection regulations.
### Zero trust network access (ZTNA) framework implementation
Zero Trust is often summarised as “never trust, always verify.” Rather than assuming that users or devices inside the corporate network are inherently trustworthy, Zero Trust Network Access requires continuous authentication and authorisation for every access request, regardless of location. In practice, ZTNA solutions provide application-level access based on user identity, device posture, and contextual signals, replacing broad VPN access with more granular controls.
Implementing Zero Trust is a journey rather than a single project. Organisations typically start by identifying critical applications and high-risk user groups, then progressively reduce implicit trust by segmenting networks, tightening access controls, and integrating identity, endpoint, and network telemetry. The result is a security posture that is far more resilient to credential theft, lateral movement, and supply chain compromises. From a privacy standpoint, Zero Trust also supports data minimisation by limiting unnecessary exposure of systems and information.
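Conceptually, a ZTNA policy decision point re-evaluates identity, device posture, and context on every request. The toy decision function below illustrates that idea; the specific signals and the geographic allow-list are invented for the example:

```python
def ztna_allow(user: dict, device: dict, request: dict) -> bool:
    """Never trust, always verify: every check must pass on every request.
    The signals and rules here are illustrative assumptions."""
    return (
        user.get("mfa_verified") is True
        and device.get("managed") is True
        and device.get("disk_encrypted") is True
        and request.get("app") in user.get("entitled_apps", [])
        and request.get("geo") in {"GB", "DE", "FR"}  # example allow-list
    )

print(ztna_allow(
    {"mfa_verified": True, "entitled_apps": ["payroll"]},
    {"managed": True, "disk_encrypted": True},
    {"app": "payroll", "geo": "GB"},
))  # True -- but flip any single signal and access is denied
```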
### Multi-factor authentication (MFA) and privileged access management (PAM)
Multi-Factor Authentication is one of the most cost-effective controls for preventing account compromise. By requiring users to provide two or more authentication factors—such as something they know (a password), something they have (a token or phone), or something they are (biometrics)—MFA drastically reduces the success rate of phishing and credential stuffing attacks. Regulators and industry bodies increasingly expect MFA for remote access, administrative accounts, and systems processing sensitive personal data.
Privileged Access Management complements MFA by specifically governing high-level accounts that can alter configurations, access large volumes of data, or disable security controls. PAM solutions centralise the management of privileged credentials, enforce just-in-time access, record administrative sessions, and rotate passwords automatically. By tightly controlling who can perform powerful actions—and creating an auditable trail when they do—PAM helps prevent both malicious abuse and accidental errors that could compromise data protection.
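The “something they have” factor is frequently a time-based one-time password (TOTP). The snippet below implements the standard RFC 6238 algorithm using only Python’s standard library; the base32 secret shown is a made-up example value:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example secret (base32); enrol the same value in an authenticator app
# and the six-digit codes will match within each 30-second window.
print(totp("JBSWY3DPEHPK3PXP"))
```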
### Role-based access control (RBAC) versus attribute-based access control (ABAC)
Defining who can access which data is often approached through Role-Based Access Control, where permissions are assigned to roles (such as “HR manager” or “finance analyst”) and users inherit rights based on their job function. RBAC is intuitive and relatively simple to administer, making it suitable for many core business systems. However, it can become rigid or overly permissive when users perform multiple roles or when access needs to consider additional factors like location, device type, or project membership.
Attribute-Based Access Control offers a more dynamic alternative. Instead of relying solely on roles, ABAC evaluates a combination of attributes—about the user, the resource, the action, and the context—to make fine-grained access decisions. For example, a policy might allow a clinician to view patient records only when on a managed device within a specific jurisdiction and assigned to that patient’s care team. While ABAC can be more complex to design and implement, it aligns closely with privacy requirements that emphasise purpose limitation and contextual risk. Many organisations adopt a hybrid model, using RBAC for broad entitlements and ABAC for high-risk or highly regulated data sets.
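The contrast is easiest to see side by side. In the sketch below, the RBAC check consults a static role-to-permission table, while the ABAC policy encodes the clinician example from the previous paragraph; all attribute names are invented for illustration:

```python
# --- RBAC: permissions hang off roles alone -------------------------------
ROLE_PERMISSIONS = {
    "hr_manager":      {"read_employee_records", "edit_employee_records"},
    "finance_analyst": {"read_ledgers"},
}

def rbac_allows(role: str, permission: str) -> bool:
    return permission in ROLE_PERMISSIONS.get(role, set())

# --- ABAC: the decision also weighs resource and context attributes -------
def abac_allows(user: dict, resource: dict, context: dict) -> bool:
    """A clinician may view a record only on a managed device, within the
    record's jurisdiction, and only for patients on their own care team."""
    return (
        user["role"] == "clinician"
        and context["device_managed"]
        and context["jurisdiction"] == resource["jurisdiction"]
        and resource["patient_id"] in user["care_team_patients"]
    )

print(rbac_allows("finance_analyst", "read_ledgers"))  # True
print(abac_allows(
    {"role": "clinician", "care_team_patients": {"p-102"}},
    {"patient_id": "p-102", "jurisdiction": "GB"},
    {"device_managed": True, "jurisdiction": "GB"},
))  # True -- change any attribute and the same role is denied
```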
## Incident response planning and business continuity frameworks
No organisation can guarantee immunity from security incidents, but every organisation can control how effectively it responds. A well-defined incident response plan, integrated with broader business continuity and disaster recovery frameworks, enables companies to contain damage, restore operations, and meet legal obligations when the unexpected occurs. Regulators increasingly expect documented, tested plans, recognising that preparedness is a key indicator of responsible data stewardship.
### NIST Cybersecurity Framework and ISO 27001 alignment
The NIST Cybersecurity Framework and the ISO 27001 standard provide structured approaches for managing information security risk. NIST offers a flexible, outcome-focused framework organised around the core functions of identify, protect, detect, respond, and recover, while ISO 27001 sets out requirements for establishing, implementing, maintaining, and continually improving an information security management system (ISMS). Aligning with one or both can help organisations prioritise investments, demonstrate due diligence, and create a common language between technical teams and leadership.
In practice, many organisations map their existing controls and processes to these frameworks, identifying gaps and developing a roadmap for improvement. Regular internal audits, management reviews, and corrective action plans ensure that incident response capabilities evolve alongside the threat landscape and business changes. Certification to ISO 27001 can also provide external assurance to customers, partners, and regulators that security and data protection are being managed systematically rather than ad hoc.
### Security information and event management (SIEM) tools: Splunk and QRadar
Effective incident response depends on timely detection, yet many organisations struggle with fragmented logs and alert fatigue. Security Information and Event Management platforms such as Splunk and IBM QRadar aggregate data from across the enterprise—firewalls, endpoints, applications, cloud services—and apply correlation rules and analytics to identify suspicious patterns. When configured well, SIEMs can surface anomalies that would otherwise go unnoticed, such as unusual login locations, data exfiltration attempts, or privilege escalations.
However, SIEMs are not magic boxes; they require thoughtful tuning, continuous maintenance, and skilled analysts to interpret findings. Organisations should start with clear use cases aligned to their highest risks, gradually expanding coverage as they build maturity. Integrating SIEM alerts with incident response playbooks and ticketing systems helps ensure that potential breaches are investigated promptly and consistently, reducing dwell time and limiting the impact on personal and corporate data.
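Under the hood, a correlation rule is often just stateful logic over an event stream. The sketch below flags a burst of failed logins followed by a success from the same source, one classic brute-force signature; the event fields, window, and threshold are illustrative assumptions:

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # look-back window (assumption)
FAIL_THRESHOLD = 5     # failures before a success becomes suspicious

recent_failures: dict[str, deque] = defaultdict(deque)

def correlate(event: dict) -> str | None:
    """event = {"ts": epoch_seconds, "src_ip": ..., "outcome": "fail"|"success"}"""
    q = recent_failures[event["src_ip"]]
    # Drop failures that have fallen out of the correlation window.
    while q and event["ts"] - q[0] > WINDOW_SECONDS:
        q.popleft()
    if event["outcome"] == "fail":
        q.append(event["ts"])
        return None
    if len(q) >= FAIL_THRESHOLD:
        return f"ALERT: {len(q)} failures then success from {event['src_ip']}"
    return None

events = [{"ts": t, "src_ip": "203.0.113.7", "outcome": "fail"} for t in range(100, 106)]
events.append({"ts": 110, "src_ip": "203.0.113.7", "outcome": "success"})
for e in events:
    if alert := correlate(e):
        print(alert)  # ALERT: 6 failures then success from 203.0.113.7
```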
### Digital forensics and chain of custody documentation procedures
When a serious incident occurs, digital forensics plays a crucial role in understanding what happened, which data was affected, and how to prevent recurrence. Forensic investigators collect and analyse evidence from systems, logs, and devices, reconstructing attacker activity and identifying the root cause. To ensure that this evidence can support regulatory investigations or legal proceedings, a clear chain of custody must be maintained—documenting who handled which artefacts, when, and under what conditions.
Establishing forensic readiness in advance can significantly improve the quality and speed of investigations. This includes defining evidence retention policies, configuring systems to generate appropriate logs, and training internal teams or pre-contracting external specialists. By treating digital forensics as an integral part of incident response planning, rather than an afterthought, organisations enhance their ability to learn from incidents, meet reporting obligations, and defend their decisions if challenged.
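One piece of forensic readiness can even be automated: fingerprinting artefacts and appending custody entries as they change hands. The sketch below writes a simple JSON-lines log; the file layout and field names are invented for illustration, not a standard format:

```python
import hashlib, json, time

def sha256_file(path: str) -> str:
    """Fingerprint an evidence artefact so later tampering is detectable."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def record_custody(log_path: str, artefact: str, handler: str, action: str) -> None:
    """Append one chain-of-custody entry: who handled what, when, and the
    artefact's hash at that moment (the log format is an invented example)."""
    entry = {
        "timestamp_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "artefact": artefact,
        "sha256": sha256_file(artefact),
        "handler": handler,
        "action": action,
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")

# record_custody("custody.jsonl", "disk_image.dd", "j.smith", "acquired image")
```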
### Disaster recovery as a service (DRaaS) for critical data assets
Even with strong preventive controls, some incidents—whether cyberattacks, hardware failures, or natural disasters—will disrupt normal operations. Disaster Recovery as a Service solutions provide cloud-based replication and failover capabilities that allow organisations to restore critical systems and data rapidly in an alternate environment. Rather than maintaining duplicate infrastructure in-house, companies can leverage DRaaS providers to achieve ambitious recovery time objectives (RTOs) and recovery point objectives (RPOs) in a cost-effective manner.
To be effective, DRaaS must be underpinned by rigorous planning and regular testing. Business impact analyses help identify which applications and data sets are truly mission-critical, ensuring that limited resources are focused where downtime would be most damaging. Scheduled failover exercises validate that backups are usable, configurations are current, and staff know their roles under pressure. When combined with robust cybersecurity, IAM, and DLP controls, DRaaS forms a vital safety net—ensuring that, even in the face of severe disruption, organisations can continue to operate, serve customers, and protect the data entrusted to them.
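RPO compliance, at least, can be checked continuously: compare the age of the newest replicated recovery point against the agreed objective. A minimal monitoring sketch with invented timestamps:

```python
from datetime import datetime, timedelta, timezone

RPO = timedelta(minutes=15)  # contractual recovery point objective (example)

def rpo_breached(last_replica_ts: datetime, now: datetime | None = None) -> bool:
    """True if the newest recovery point is older than the RPO allows."""
    now = now or datetime.now(timezone.utc)
    return now - last_replica_ts > RPO

last_snapshot = datetime(2024, 5, 1, 12, 0, tzinfo=timezone.utc)
print(rpo_breached(last_snapshot, datetime(2024, 5, 1, 12, 10, tzinfo=timezone.utc)))  # False
print(rpo_breached(last_snapshot, datetime(2024, 5, 1, 12, 30, tzinfo=timezone.utc)))  # True
```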