
12th July 2024
Since we rely on digital services more than ever before, more and more aspects of our lives require access to online software or applications.
From emails and social media, to accessing government services and essential utilities, our personal identifiable information is transferred between multiple organisations—and with it comes security concerns.
Increasingly, our biometric data is being used for access to these services, and if you’re a business or institution that processes and stores customer biometric data, how can you manage biometric privacy concerns and consent?
Biometric data refers to unique physical or behavioral characteristics of individuals used for identification and authentication purposes.
These characteristics include, but are not limited to, fingerprints, facial features, iris and retina patterns, voice, and behavioural traits such as typing rhythm or gait.
Because these traits are inherently unique to each person, biometric data provides a highly reliable method of verifying identity. It’s commonly employed in security systems - for example, unlocking smartphones, accessing secure facilities, and in government identification programs like passports and national ID cards.
The use of biometric data enhances security by reducing the risk of identity theft and unauthorised access, as it’s much harder to replicate or steal compared to traditional passwords or PINs. However, the collection and storage of biometric data raises privacy and ethical concerns, as it involves sensitive personal information that, if misused or breached, can lead to significant privacy violations.
According to Article 9 of UK GDPR, organisations have to make special considerations for processing ‘special categories of personal data’. This includes personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, as well as genetic data, biometric data used to uniquely identify a person, health data, and data concerning a person’s sex life or sexual orientation.
Because biometric data is considered to be personal information by the ICO, if you’re a British organisation you must comply with data protection law when you process it, and explicit consent is likely to be the most appropriate lawful basis to do so.
Importantly, ICO guidance on UK GDPR states that “if you can’t identify a valid condition, you must not use special category biometric data.”
The six lawful bases for processing personal information are consent, performance of a contract, a legal obligation, vital interests, a public task and legitimate interests. For ‘special categories of personal data’, however, Article 9 requires you to identify a separate condition on top of your lawful basis, and four of these conditions are particularly relevant to processing biometric data. They are:
If you’re an organisation that processes the personal information of EU citizens, you can find out what you have to do to secure biometric data consent here.
READ MORE: How UK GDPR impacts Digital Asset Management
Artificial intelligence is having a big impact on every digital industry, and how organisations handle biometric data is no different.
AI can analyse and process vast amounts of biometric data, while machine learning algorithms detect patterns, anomalies and irregularities within it, helping to identify and prevent fraudulent activity. AI systems also evolve continuously, so an AI-powered system can adapt to changes in an individual’s biometric characteristics to stay as robust and secure as possible.
However, there are also ethical implications and biometric privacy concerns that come with the use of AI technology.
First of all, there’s the risk of unauthorised access or data breaches when AI algorithms process and analyse sensitive information.
Secondly, there are serious questions around bias and discrimination when AI algorithms are used for processing biometric data. If the training data used for AI models is biased, it can result in inaccurate identification or verification.
Using AI for processing biometric data also risks breaching your legal obligations under UK GDPR. While human-controlled processes can be managed more strictly, AI can be susceptible to ‘function creep’, where the use of biometric data expands beyond the original purpose without the individual’s explicit consent.
For biometric data consent processes to be effective and compliant with UK GDPR, they must be clear, accessible and revocable. Consent shouldn’t be hidden in contract clauses; it should be set out in an easily understandable, standalone policy document, presented so that customers, users, employees and other stakeholders whose biometric data you process can digest the information and make an informed decision.
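To make the requirements above concrete, here is a minimal, hypothetical sketch of what a consent record might look like in code: consent is tied to a specific, stated purpose (guarding against ‘function creep’) and can be revoked at any time. All names here (`ConsentRecord`, `may_process`, the example purposes) are illustrative assumptions, not part of any real library or of ResourceSpace’s implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """Illustrative record of one individual's consent for one purpose."""
    subject_id: str                     # the individual giving consent
    purpose: str                        # the specific purpose consent was given for
    granted_at: datetime
    revoked_at: Optional[datetime] = None

    @property
    def is_active(self) -> bool:
        return self.revoked_at is None

    def revoke(self) -> None:
        # Consent must be revocable at any time; record when it was withdrawn.
        if self.revoked_at is None:
            self.revoked_at = datetime.now(timezone.utc)

def may_process(record: ConsentRecord, purpose: str) -> bool:
    # Processing is only allowed while consent is active AND for the exact
    # purpose it was granted for -- a simple guard against function creep.
    return record.is_active and record.purpose == purpose

consent = ConsentRecord("user-42", "building access", datetime.now(timezone.utc))
print(may_process(consent, "building access"))      # True
print(may_process(consent, "marketing analytics"))  # False: different purpose
consent.revoke()
print(may_process(consent, "building access"))      # False: consent withdrawn
```

Real systems would also need audit trails, versioned policy text and secure storage, but the core idea is the same: purpose and revocation are first-class parts of the record, not an afterthought.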
Let’s take a look at ten specific strategies in more detail.
Ready to find out how ResourceSpace can help your organisation manage consent and overcome those challenges? Click below to book your free Digital Asset Management demo and see our privacy functionality in action.
#Consent
#GDPR
#DataProtection
#LegalCompliance
#DataBreach
#DataProcessing
#IndustryNews