Visual Data Sharing with Third Parties - Personal Data Protection and Transatlantic Data Transfers Perspective

Łukasz Bonczol
Published: 12/12/2025
Updated: 3/10/2026

Visual data anonymization is the process of altering photos and videos so that individuals and vehicles can no longer be identified, typically through face blurring, license plate blurring, background masking, and metadata removal. Under GDPR and UK GDPR, data that is truly anonymous falls outside the scope of data protection law, provided re-identification is not reasonably likely, taking into account all the means reasonably likely to be used [1][2].

Why anonymization matters when publishing images and videos

Images and videos often contain personal data, including faces, license plates, uniforms with names, and recognizable locations. If a person can be singled out or identified, the footage is personal data and processing must meet GDPR or UK GDPR requirements. Anonymization allows organisations to publish, share with agencies, or train internal teams while reducing regulatory exposure. However, anonymization must be robust. If a subject can still be identified by context or metadata, the output remains personal data [1][4].

Techniques that commonly support compliance

Face blurring and license plate blurring reduce direct identifiers. Background masking or selective redaction removes secondary cues such as distinctive tattoos, building numbers, or workstation screens. Metadata removal strips EXIF and IPTC tags that could reveal device IDs, GPS coordinates, or timestamps enabling linkage. Organisations often combine these methods and test outputs for re-identification risk as a matter of good practice.
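Metadata removal in particular is easy to automate. The following is a minimal, illustrative sketch in Python (standard library only, not a production tool): it walks the JPEG marker segments and drops the APP1-APP15 and comment segments that carry EXIF, XMP, and editing history, while keeping the image data intact. Real pipelines would typically use an established tool or imaging library instead.

```python
def strip_jpeg_metadata(data: bytes) -> bytes:
    """Remove APP1-APP15 (EXIF/XMP) and COM (comment) segments from a JPEG
    byte stream. A simplified sketch: it keeps APP0 (JFIF) and copies the
    entropy-coded image data after SOS verbatim."""
    if data[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG stream")
    out = bytearray(b"\xff\xd8")  # keep the SOI marker
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            out += data[i:]  # unexpected byte: copy the remainder as-is
            break
        marker = data[i + 1]
        if marker == 0xD9:  # EOI: end of image
            out += data[i:i + 2]
            break
        seg_len = int.from_bytes(data[i + 2:i + 4], "big")
        segment = data[i:i + 2 + seg_len]
        # Drop APP1-APP15 (0xE1-0xEF) and COM (0xFE); keep everything else.
        if not (0xE1 <= marker <= 0xEF or marker == 0xFE):
            out += segment
        i += 2 + seg_len
        if marker == 0xDA:  # SOS: entropy-coded data follows to the end
            out += data[i:]
            break
    return bytes(out)
```

Embedded thumbnails live inside the dropped EXIF APP1 segment, so they are removed as well.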

The following exceptions are recognised in some jurisdictions, typically under image rights, personality rights, or similar regimes, and are highly context-dependent. They do not override data protection obligations where personal data remains identifiable.

  1. The person is widely known (a public figure), and the image was taken in connection with their public role.
  2. The person appears only as a part of a larger scene, such as a landscape or public event, and is not the main subject.
  3. The person was paid to pose (e.g., a model) and the agreed scope of use includes publication; this depends on the contract/terms and applicable local law.

Whether these apply depends on local law, copyright and image rights, and the specific facts. Where doubt remains, organisations often consider anonymization or a different lawful basis to reduce risk.

Workflow for publishing visual content with third parties

  1. Define purpose and audience. Clarify whether the visuals will be public, shared with vendors, or used internally.
  2. Minimise at capture. Avoid filming close-ups of faces and plates when not needed, and avoid capturing unnecessary sensitive scenes.
  3. Detect and blur. Apply face blurring and license plate blurring. Consider background masking for distinctive features that aid re-identification.
  4. Remove metadata. Strip EXIF/IPTC and thumbnails. Standardise filenames so they do not encode personal data.
  5. Validate quality. Sample frames to check for false negatives and false positives. Adjust thresholds and re-run when needed.
  6. Decide deployment. Prefer on-premise software to process footage locally and minimise outbound transfers. This supports security and reduces reliance on cross-border data flows.
  7. Document decisions. Record the technique settings, residual risk assessment, and checks performed. This is a common compliance approach for auditability.
  8. Share with third parties using least data. Provide anonymized exports. Where original footage must be shared, use encryption, short retention, and contractually limit downstream use.
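Step 3 above can be sketched in a few lines. The following illustrative Python function masks detected regions in a grayscale frame by overwriting each box with its mean intensity; the (x, y, width, height) boxes are assumed to come from an upstream face/plate detector (not shown here).

```python
def redact_regions(frame: list[list[int]],
                   boxes: list[tuple[int, int, int, int]]) -> list[list[int]]:
    """Mask each detected box by filling it with the box's mean intensity.

    `frame` is a grayscale image as rows of pixel values; `boxes` are
    (x, y, width, height) detections. Production tools use Gaussian blur
    or pixelation, but the redaction principle is the same: the original
    pixels must be overwritten, not merely hidden.
    """
    out = [row[:] for row in frame]  # work on a copy, leave the input intact
    for x, y, w, h in boxes:
        pixels = [frame[r][c] for r in range(y, y + h) for c in range(x, x + w)]
        mean = sum(pixels) // len(pixels)
        for r in range(y, y + h):
            for c in range(x, x + w):
                out[r][c] = mean
    return out
```

Note that naive approaches such as lowering opacity or adding a removable overlay layer do not destroy the underlying pixels and are not anonymization.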

Check out Gallio PRO for on-premise visual data anonymization aligned with these steps.

GDPR vs UK GDPR for publishing photos and videos

| Topic | EU GDPR | UK GDPR |
| --- | --- | --- |
| Anonymous data | Outside scope if re-identification is not reasonably likely [1] | Outside scope on the same principle, interpreted consistently with the GDPR concept of anonymisation [2] |
| Lawful basis for identifiable images | Purpose-dependent; often legitimate interests or consent, assessed case by case [1] | Similar assessment under UK GDPR and DPA 2018 [2] |
| Special category cues | If visuals reveal health, religion, or political views, additional conditions may apply (Art. 9 GDPR) [1] | Similar constraints under UK GDPR Art. 9 and DPA 2018 Schedule 1 conditions [2] |
| International transfers | Use adequacy, SCCs, or the EU-US DPF for US recipients, with supplementary measures where needed [1][3][6] | Use UK adequacy regulations, the IDTA or UK Addendum, and a transfer risk assessment where needed [2] |
| CCTV guidance | EDPB and national authorities stress necessity, minimisation, and transparency [1] | ICO CCTV guidance provides detailed expectations for video imagery [5] |

Transatlantic data transfers when sharing visuals

When photos or videos remain personal data after processing, sharing them with a US vendor is a Chapter V transfer under GDPR. Options include: (1) the EU-US Data Privacy Framework for certified US recipients [6], (2) Standard Contractual Clauses with transfer risk assessment and supplementary measures where needed [3], or (3) an adequacy decision where applicable. Under UK GDPR, similar tools apply through adequacy regulations and the International Data Transfer Agreement (IDTA) or the UK Addendum to the EU SCCs, supported by a transfer risk assessment where needed [2]. On-premise software helps by avoiding transfers altogether, or by allowing anonymization before any export so that the shared files are outside data protection scope if robustly anonymized [1][2].

Download a demo to evaluate on-premise processing that keeps raw footage within your environment.

Risk points businesses often miss

  • Re-identification risk. Even after face blurring, subjects can be identified from clothing, gait, location uniqueness, or recurring patterns across posts. Risk depends on who will access the footage and what external data they may combine it with [4].
  • Background identifiers. Whiteboards, workstation screens, door signs, uniforms with names, and unique interiors can expose identity. Background masking or selective redaction is frequently required for enterprise publishing.
  • Metadata. EXIF GPS coordinates, device serials, time and date, and editing history can link to a person or place. Removing metadata before publication is a common compliance approach described in regulators’ good-practice materials (noting that guidance may differ by context) [5].
  • Detection errors. Automated face and plate detectors may miss small, occluded, or angled subjects. False negatives create compliance risk. False positives can over-redact and reduce utility. Quality assurance on representative samples and maintaining versioned processing settings are effective safeguards.
  • Children in footage. Children’s images increase risk and public sensitivity. Where identification remains possible, organisations often adopt stricter minimisation and validation before sharing or publishing.
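The point about detection errors can be made concrete with a simple quality-assurance check. The illustrative Python sketch below compares detector output against hand-labelled ground-truth boxes on sampled frames and reports the ground-truth boxes that no detection overlaps sufficiently (intersection-over-union below a threshold), i.e. the false negatives that matter most for compliance. The (x, y, width, height) box format is an assumption.

```python
def iou(a: tuple, b: tuple) -> float:
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0


def missed_detections(ground_truth: list, detected: list,
                      threshold: float = 0.5) -> list:
    """Return ground-truth boxes that no detection matches above the IoU
    threshold - the false negatives that leave faces or plates unblurred."""
    return [gt for gt in ground_truth
            if all(iou(gt, d) < threshold for d in detected)]
```

Running such a check on a representative sample before release, and recording the miss rate alongside the versioned detector settings, supports both quality and auditability.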

Tooling choices and on-premise software

Cloud services can be efficient but may trigger cross-border transfer analysis and additional security requirements. On-premise software allows processing directly on secure infrastructure, supports data locality controls, and simplifies vendor risk management for raw footage. It also enables pre-transfer anonymization so that downstream content is likely outside GDPR scope if robustly anonymized. Performance and cost outcomes are context-dependent and vary by video resolution, scene complexity, and automation level.

Contact us to discuss on-premise visual data anonymization aligned with internal security policies.

FAQ - Visual Data Sharing with Third Parties

Q1: Does face blurring always make footage anonymous?

Not always. If people remain identifiable via clothing, context, or recurring appearances across posts, the footage may still be personal data. Additional masking and metadata removal are often needed [1][4].

Q2: When should license plate blurring be applied?

Whenever number plates are visible and the footage will be shared or published. Plates can be personal data (in particular where they enable identification directly or indirectly), and usually require redaction unless a clear lawful basis applies and the risk is acceptable.

Q3: Are automated tools enough without human review?

Automated tools reduce workload, but detection errors occur. A proportionate human review of samples is a common compliance approach, especially for high-risk or public releases.

Q4: How does on-premise software help with data transfers?

Processing locally avoids sending raw visuals to external processors. If anonymization is robust before any export, the resulting files may be outside GDPR or UK GDPR transfer rules because they are no longer personal data [1][2].

Q5: Do the three exceptions remove all GDPR duties?

No. The listed exceptions concern image rights and are context-dependent. If individuals remain identifiable, GDPR or UK GDPR obligations can still apply, including transparency and minimisation.

Q6: What should be removed from metadata?

Common practice includes removing GPS coordinates, device IDs, timestamps, creator fields, and embedded thumbnails that could enable linkage.

Q7: Is a DPIA required for publishing videos?

It depends on scale, context, and risk. Large-scale systematic monitoring or other high-risk processing may trigger a DPIA under GDPR or UK GDPR. Anonymization can reduce residual risk.

References list

[1] Regulation (EU) 2016/679 (General Data Protection Regulation), including Recital 26 and Articles 4 and 44-49.
[2] UK GDPR and the Data Protection Act 2018.
[3] European Data Protection Board, Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data.
[4] Article 29 Working Party, Opinion 05/2014 on Anonymisation Techniques.
[5] UK Information Commissioner's Office, guidance on CCTV and surveillance systems, and "What is personal data?".
[6] European Commission Implementing Decision of 10 July 2023 on the adequacy of the protection provided by the EU-US Data Privacy Framework.