Failures in Face Recognition

Interesting article on people with nonstandard faces and how facial recognition systems fail for them.

Some of those living with facial differences tell WIRED they have undergone multiple surgeries and experienced stigma for their entire lives, which is now being echoed by the technology they are forced to interact with. They say they haven’t been able to access public services due to facial verification services failing, while others have struggled to access financial services. Social media filters and face-unlocking systems on phones often won’t work, they say.

It’s easy to blame the tech, but the real issue is the engineers who considered only a narrow spectrum of potential faces. That needs to change. But we also need easy-to-access backup systems for when the primary ones fail.

Posted on October 22, 2025 at 7:03 AM • 18 Comments

Comments

Frederick Page October 22, 2025 7:43 AM

I recently suffered a facial injury from falling onto an object. At least two facial recognition systems that were trained on my uninjured face still do not recognize my current face, even though the injury is now just a small bruise.

kiwano October 22, 2025 9:39 AM

@Andrew Stur:

It depends on the metric you use for determining the narrowness of the spectrum. If you’re (appropriately) basing the metric on fundamental variables like “how far the nose is from the centreline of the face, measured in pupillary distances” (which itself presupposes that the face contains two recognizable pupils, and puts extremely narrow bounds on “how many recognizable pupils are there” as a metric or part of one), then it’s entirely possible for a relatively broad cross-section of the population to fall within a narrow range of values for that metric.

Or put another way: just because the overwhelming majority of people who drive down a road remain on that road, doesn’t mean the road’s not narrow.
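A minimal sketch of the kind of metric being described (the function, landmark coordinates, and values are purely illustrative, not taken from any real face-recognition library):

```python
# Toy version of "nose offset from the facial centreline, measured in
# pupillary distances". Inputs are (x, y) pixel coordinates assumed to
# come from some face-landmark detector.

def nose_offset_in_pupillary_distances(left_pupil, right_pupil, nose_tip):
    # Pupillary distance is the normalizing unit, which already
    # presupposes exactly two recognizable pupils.
    pd = ((right_pupil[0] - left_pupil[0]) ** 2 +
          (right_pupil[1] - left_pupil[1]) ** 2) ** 0.5
    if pd == 0:
        raise ValueError("metric undefined without two distinct pupils")
    # Centreline approximated as the vertical line through the midpoint
    # of the pupils; return the signed horizontal nose offset in PD units.
    mid_x = (left_pupil[0] + right_pupil[0]) / 2
    return (nose_tip[0] - mid_x) / pd

# Most faces land in a narrow band around 0: the "narrow road" that
# the overwhelming majority of drivers stay on.
print(nose_offset_in_pupillary_distances((100, 120), (160, 120), (131, 160)))
```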

Peter A. October 22, 2025 10:26 AM

Is automatic facial recognition so essential in some areas of life or technology that this problem of incomplete coverage is serious? I am not trying to make those people feel ignored, but I have always assumed FR is just a fancy gimmick that nobody really uses for serious things, or at most an optional convenience feature. I am not using it at all, and I haven’t experienced any pressure to use it. Does anything REQUIRE you to use it nowadays, with no alternatives? I’m just curious.

Clive Robinson October 22, 2025 11:22 AM

@ Bruce, ALL,

With regards,

“It’s easy to blame the tech, but the real issue is the engineers who considered only a narrow spectrum of potential faces.”

And that is being somewhat disingenuous.

The “engineers” work to a specification, towards what is sometimes called a “Factory Acceptance Test” (FAT), to get “sign-off”.

So you need to move up the stack at least one step, to those who produced the “specification” and “FAT”.

But those people can point to the same reasoning: that they were

“Just working to the Customer Spec, under Management Direction.”

Which again moves up the stack.

The real problem is that the technology just does not work, because of its assumptions. It’s a “signal to noise ratio” issue: the actual signal is way, way below the noise in the channel, so you need to filter in ways that are at best extremely difficult.

Which is the point in the discussion where I begin to sound like a racist nut-bar, whilst I’m not.

Humans know that they are looking at a “face” primarily not by “what” facial features they see, but by “where” they see them with regard to the broader physical shape of a human, which is based on the structure of the skeleton (and where the problems for biometrics actually start).

You can test this yourself by looking at people far enough away that facial features really can’t be seen at the 1-in-2000 resolving ability of the “average” human eye (it’s known to be better in Australian Aboriginals).

So if we assume features such as the nose, eyes, lips, etc. are about 1cm or 10mm, we should be incapable of recognising a person beyond 20,000mm, or about 65ft in US units. Which is about the distance from the back door to the bottom of the yard in traditional urban housing[1].
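To make the arithmetic explicit, where $s$ is the feature size and $R$ the 1-in-2000 resolving ratio:

$$d_{\max} = s \times R = 10\,\mathrm{mm} \times 2000 = 20{,}000\,\mathrm{mm} = 20\,\mathrm{m} \approx 65\,\mathrm{ft}$$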

But actually we do recognize people, even strangers, at much greater distances, which raises the question: how?

In short, it’s due to the skeletal structure, how it moves, and the gross feature shapes that show through even “street wear” clothing. If we assume a little over 1200mm / 4ft of body is visible, the same 1-in-2000 figure means we can tell it’s a person moving around out to roughly 2,400m, about 8,000ft or a mile and a half: “down the road” or “across the valley”.

Thus humans mostly don’t need to “recognize faces” to recognize people and what they are (it’s why basic military training on camouflage is about hiding the body shape and characteristic movements).

But to recognise “individuals” biometrics has gone with “facial features”…

The reason is supposedly precision: whilst the size and shape of body parts sort of works, the error rates only distinguish reliably within very small groups, like a family. That’s because the shape is defined by the structural bones, with an overlying layer of muscle, a further overlying layer of fat, then skin and hair.

To see why this is important, put on a surgical rubber glove that is too small: most of the surface features are smoothed away. Put on one a little too large and the same thing happens. Blow air into the glove so it is slightly inflated and it’s not just the surface features that change beyond use; width and length change too, in only semi-predictable ways.

The “solution” in biometrics for larger groups has in effect always been to “cheat with statistics” (because things just don’t scale up). That is, it’s based on making lots of measurements and looking at the ratios and the ranges they fall into.

Thus it’s a scaling or “relative measurement” system at best. BUT the ratios are also imprecise because, amongst other issues, the fat layers suffer from fluid retention that changes during the day (people look tired when their features are not sharp).
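A toy sketch of why such ratio-based “relative measurement” matching is fragile (the landmark distances and tolerance are invented for illustration; real systems are far more elaborate):

```python
# Raw distances (mm) between hypothetical facial landmarks are reduced
# to scale-free ratios; two faces "match" if every ratio stays within a
# tolerance band. This only shows how daily soft-tissue changes
# (e.g. fluid retention) shift the ratios and break a match.

def ratios(measurements):
    base = measurements[0]           # normalize by the first measurement
    return [m / base for m in measurements]

def matches(enrolled, probe, tolerance=0.03):
    return all(abs(e - p) <= tolerance
               for e, p in zip(ratios(enrolled), ratios(probe)))

enrolled = [62.0, 35.0, 48.0, 110.0]  # e.g. pupil gap, nose width, etc.
tired    = [62.0, 36.5, 49.5, 112.0]  # same face, soft tissue slightly swollen
print(matches(enrolled, tired))       # False: one ratio drifted past tolerance
```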

We mostly don’t realise this until we get older, when various forms of oedema in the legs indicate underlying health issues such as heart or kidney failure, high blood pressure, and other symptoms of our impending demise… But our brains filter these changes out when it comes to recognising people.

Biometric systems, however, don’t filter it out unless the filtering is built in in some way, and it usually isn’t, because of the CPU cycles required. The trick used is to “find the bones” by making assumptions about the overlying muscle, fat, and skin (which suffers from the same issue as forensics, “it ain’t science”, because it reasons backwards from effect to assumed cause).

This “assumed bone shape” is then used as a pointer to a subset of the database of user profiles. Get it wrong and you search the wrong subset: you get the wrong person, or no one, depending on how much you relax the ratios…

And so on.

The thing is, your genetic information defines your skeletal structure. That information comes from two sources, “genetics” and “epigenetics”: what you inherit from your parents, and the effects of the environment.

Thus groups of people in isolation develop characteristic differences based on evolutionary advantage.

It’s why, by examining just the physical characteristics of the bones, we can tell quite a bit about what a person “may” have looked like.

But… Bone shape indicates “race”, which in turn indicates the muscle and other layers that go over it.

If you interpret the bone shape incorrectly, you will get the layers wrong and end up with the wrong surface reconstruction. Or, in the case of biometric systems, an incorrect pointer into the database search.

We were aware of this issue long before the actual biometrics we use today. Less than two centuries ago people made assumptions based on racial characteristics that unfortunately still persist, and it remains a highly emotive and politicised issue.

But the fact is, large-group biometric systems, by the nature of the way they work, “discriminate” not by individual identity but by race.

This subject is a very deep warren beneath the rabbit hole and it’s a place most do not want to go.

But all large-group biometric systems fail in this way, because of statistical failures.

Trying to improve such a system usually involves “tuning” it, in somewhat the same way we do with “Current AI LLM and ML Systems”, but on a more directed, thus selective, set of parameters.

If you look at the failures of LLM and ML systems, you will find very similar failures in large-group biometric systems, just less obviously (they have existed in fingerprint systems for as long as fingerprint systems have, a fact their vendors try to keep hidden).

So yes, such biometric systems are going to fail “because of the statistical failures”, and we do not currently know how to solve these issues; we may never do so.

Which raises the question,

“Why do we build such failure prone systems, that we know are going to fail?”

The answer is threefold,

1, Idiots with large amounts of money at their disposal have political mantras and need ways of implementing them at arm’s length.

2, Where there is money a market will form; it’s basic supply and demand at work.

3, Where data is collected, it always ends up used for far more than originally claimed.

Thus it’s a circular process to,

“Rob the taxpayers and divert the money into select pockets.”

But also as an enabler to a “Surveillance State” and all the societal harms that gives rise to.

So if you want to put the blame anywhere then put it on the constant supply of “political mantra idiots”.

It’s been one of the defining characteristics of “The War on Terror” and the resulting “Security Theatre”, which has diverted trillions into “favoured pockets” at very great expense to general society. Not just directly, but also through “lost opportunity costs” and the associated “harms”, which are causing many of the issues we increasingly see in the news.

[1] US “lots” were once 100ft by 100ft, so that a year’s supply of food could be grown/raised for a family in an ~25ft × 25ft house. Due to other issues such as utilities, though the square footage of a lot remained around 10,000, the lot shape became increasingly elongated to squeeze down the “frontage” along which the utility supply ran, thus getting more homes per unit length of supply infrastructure.

KC October 22, 2025 11:29 AM

Face Equality International (FEI) had a session about facial recognition tech at their 2025 annual forum.

https://faceequalityinternational.org/fei-forum/

They look like a great group, with some stellar involvement. It’s an event I wish I could have attended, though I don’t think the session has been posted anywhere yet. From the forum page:

AI and facial recognition software is everywhere. From opening up your phone, to opening up a bank account, all seems to be reliant on your face being recognised. But we know this doesn’t always work for someone with a facial difference. Hear from the experts in AI and discuss as a community how we can protect ourselves and work together to ensure the technology works for us.

A few more posts on the forum.

Clive Robinson October 22, 2025 12:02 PM

@ Andrew Stur,

With regards your,

98.5% is hardly “a narrow spectrum” (8 billion vs 100 million)

It’s an entirely incorrect assumption.

Wired says “more than”, which puts 100 million well south of the actual figure, as the majority of the world’s population falls into the “unknown” category for people with significant facial differences[1].

Worse, 8 billion is just a statistical assumption, used the way the word “big” is used to mean greater than “large”.

I’m a person with a couple of facial injuries that I’ve mentioned before. They have left a couple of scars, one “in the bone” and one “in the flesh”. Because of where they are, on the edge of the lower jaw, they move with respect to each other and visibly change shape as I talk. People staring is unpleasant to the point of embarrassment.

Since I no longer have to be “clean shaven”, I grew a beard to cover them up.

Let’s just say I’m reasonably certain I’m in that “unknown group”.

Worse, I’ve actually been told to my face, by a presenter in front of a room full of people, that my beard means I’m hiding something in a quite sinister way. When I explained why I had the beard, the person almost shrieked with glee that I was wrong and she was right…

I became curt and it went rapidly downhill.

[1] This large unknown group exists because those people don’t have access to health care, where these sorts of things are “logged”. And even where they are logged, they are not put into available records.

Bear October 22, 2025 12:53 PM

I may have a dark imagination but the first thing I thought of was “PLEASE don’t give crooks a reason to remove people’s faces.”

IIRC we’ve already seen detached dead fingers used in attempts to access fingerprint devices, and I’ve seen advertisements for fake fingers that are supposed to let you log into your devices.

And we’ve already seen acid attacks where disfiguring someone’s face and making them ugly is the whole point.

While attempting to use someone else’s actual detached face to access something doesn’t seem likely, there is a plausible denial-of-service attack that involves damaging their face.

DDNSA October 22, 2025 12:54 PM

There are a few optical illusions that, if painted on one’s face, could send any face recognition system into a loop until kingdom come, but one would first have to analyze the “guts” (code) of the software to know how to do it. Keep in mind that most programming still uses action-reaction (conditional IF/THEN statements), so if there is no “IF”, THEN what happens? Hacking ANY system is a piece of cake if you know how to hit it where it does not even register that it’s been hit. If you are trying to hack something designed by an individual who’s intellectually inferior to you, then you will most likely succeed.

lurker October 22, 2025 2:16 PM

This sounds like a First World problem. Most of the others would never get within squinting distance of a facial recognition system. Interesting too that it was Chinese phone camera makers who came up with the way to render dark skin tones acceptably.

Wired is hard to read on lynx. The war between their paywall, JavaScript, their unrelated media torrent[1], and my limited bandwidth makes Wired unreadable on my usual browser. Searching phrases from the paragraph quoted by @Bruce turns up many partly related articles, including one from The Atlantic[2], which concludes:

The truth is there are very few ways to verify someone’s identity online well, and no ways to do it both effectively and anonymously.

[1] One of their inline streams (not even a closable popup) had the ironic caption “Unwanted devices and apps spying on you in your home”

[2] https://www.theatlantic.com/technology/archive/2025/08/facial-recognition-sham/683831/

BCS October 22, 2025 3:11 PM

I simply won’t use biometric authentication.

I’ve yet to run into a system that didn’t have another option, but I’m guessing I’ll eventually run into something where the “other option” is to take my business somewhere else. I hope market pressure steps in before the “other option” ends up being “do without”.

Side note: the reasons I refuse to use them are that a) I can’t “reset” my face like I can a password, and b) I don’t trust them. I know of one system, supposedly a high-security physical access control system, that would unlock for anything that looked like a face, including things that didn’t look anything like any of the faces that were supposed to have access (e.g. a stuffed animal).

Clive Robinson October 22, 2025 5:09 PM

@ Bear,

With regards,

‘I may have a dark imagination but the first thing I thought of was “PLEASE don’t give crooks a reason to remove people’s faces.”’

It was near the top of my long list of thoughts about “the harms” of biometric systems I referred to above.

However at the top of my list was,

“Law enforcement, and worse Guard Labour, taking away people’s right to silence and not to incriminate themselves.”

We already see ICE and other Federal employees demanding that people unlock their devices on demand. With judges right up to and including the Supreme Court claiming what is in effect,

“If you have nothing to hide…, then you must be guilty by refusing.”

Biometrics allows Guard Labour access by simple brute force, which they have already used on several occasions.

US law, which originated from English law, was based on two basic notions that go back a thousand years or so,

1, The presumption of innocence.
2, That the prosecution has a burden of proof to meet: “beyond reasonable doubt”.

For the simple reason that “the state” has an unfair advantage over the innocent, and all too often “abused its position” with “might is right” behaviours and “show trials”.

If people read the Fourth Amendment and study a little “real, not faux” history[1], they would quickly realise that an overriding consideration, not just of the Founders of the US but of most others who practiced law there at the time (yup, English lawyers) and the English who had made that part of America their home, was to rid themselves of the English Crown and its agents, with their “might is right” behaviours and faux prosecutions.

[1] I know it shocks many Americans when I tell them they were lied to outrageously in school, in history and even science, and how much of what they were taught was not just wrong but deliberately so, primarily for “propaganda reasons”. The resulting “cognitive dissonance” is often clearly visible. Some will view the evidence rationally and agree; others will continue believing what is false despite the evidence; and some will not view or hear the evidence at all, and may respond completely irrationally, if not violently, if you try to get them to.

Two you can fairly easily check for yourself,

1, Washington was not the first president (John hancock and eight others were before him).
2, Washington and all presidents that followed have never actually been democratically elected by the people of the US.

Several men held the position prior to Washington. Look up, Samuel Huntington, John Hanson, Elias Boudinot, Thomas Mifflin, Richard Henry Lee, John Hancock, Nathaniel Gorham, Arthur St. Clair, and Cyrus Griffin.

Check the way the voting process works, it was quite deliberately designed,

“To look democratic without it being democratic at all”.

Now having looked those up, wonder what other falsehoods you were taught at school…

Anonymous October 22, 2025 9:54 PM

@Peter A., Facial recognition is enough of a “gimmick” that governments’ border police and immigration systems are using it to speed up passport control. Try opening a bank account or applying for other highly sensitive accounts, programs, or visas: you may be asked to scan your face alongside a photo of yourself holding your ID, etc.

As exemplified in the linked article, “Every time [Department of Motor Vehicles] staff tried to take her [Drivers License] photo, Gardiner says, the system would reject it.” It is therefore no longer just a cool feature to sell to companies but an ever-more-common occurrence in civil life.

As pointed out in this Schneier post, one issue is the technology being engineered in ignorance of our humanity and variance.

Biometrics October 23, 2025 12:50 AM

Is it a matter of balancing the incidence of false positives and false negatives?

As the criteria for what counts as a “face” broaden, won’t false positives increase, such that my cat jumping on the keyboard logs into my banking?

The engineers are to blame, as Bruce noted, but not totally to blame. The physical tech (cameras and software) has limitations.

Bruce has noted one danger of biometric data: once leaked, there’s no way to replace it. Passwords lack that danger.

Another danger is the door-unlocking feature of expensive cars that uses a thumbprint or fingerprint. Carjackers can amputate a thumb, finger, or entire hand in a moment with a very sharp knife.

https://www.theregister.com/2005/04/04/fingerprint_merc_chop/

https://www.upi.com/Odd_News/2005/03/31/Finger-lopped-off-to-steal-high-tech-car/38011112277412/

Clive Robinson October 23, 2025 4:34 AM

@ Biometrics, ALL,

You ask,

“As the criteria of what a ‘face’ is becomes broadened, won’t false positives increase; such as my cat jumping on the keyboard logs into my banking?”

Your “cat & keyboard” is an example of simple probability, like the old “infinite number of monkeys typing Shakespeare”.

We know that “it is actually possible” as the Monkey or Cat can be replaced by a human who “knows what to type”.

Not so for the facial recognition systems described in the article, where it is not a matter of “simple probability”.

That is, “the face” is not being accepted by the system because it is,

“Not in the system database”

So if it’s not in the database, logically,

“It can not be matched”

Further, and worse for affected people, it is probably safe to assume that whatever algorithm is stopping a face being “scanned in” would also,

“Stop it being ‘scanned for checking’.”

That is, a face that might eventually be “scanned in” under “ideal conditions” is and will remain marginal, or will continuously fail to be recognised when “scanned for checking” under real-world “non-ideal conditions”.

But the reality is that the actual “Machine Learning” (ML) behind these supposedly advanced systems has all the issues that ML for “Current AI LLM and ML Systems” has.

Which means three things,

1, They will hallucinate.
2, A new input will affect some or all of the previous inputs.
3, They will be subject to prompt poisoning / manipulation.

Because they are actually “DSP adaptive filters” with near “infinite Q” issues, like the striking of a gong[1]. Activate them in the right way –point 3– and they will “ring” almost indefinitely –points 2 & 3– causing an issue that changes continuously. Which means the usual methods of “testing” will fail.
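To illustrate the “high Q” ringing analogy only (the parameters are invented for the sketch, not any recognizer’s internals): a two-pole digital resonator with its poles very close to the unit circle keeps ringing long after a single impulse.

```python
import math

# Two-pole IIR resonator: y[n] = x[n] + a1*y[n-1] + a2*y[n-2].
# Pole radius r near 1 gives a very high Q, so it decays extremely slowly.
r, w = 0.9999, 0.1
a1, a2 = 2 * r * math.cos(w), -(r * r)

y1 = y2 = 0.0
peak = 0.0
for n in range(20_000):
    x = 1.0 if n == 0 else 0.0     # a single impulse "activates" the filter
    y = x + a1 * y1 + a2 * y2
    y1, y2 = y, y1
    if n >= 19_000:                # inspect the last 1,000 samples
        peak = max(peak, abs(y))

print(peak)  # still ~1.4, 20,000 samples later: the "gong" has barely decayed
```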

Thus the Type 1 –false positive– and Type 2 –false negative– errors will be subject to constant change in “significance level” –denoted as alpha–[2], and once triggered may never stop (think in terms of the “Halting Problem”).
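For those keeping score, the standard biometric framing of those two error types, using the convention above that a Type 1 error is a false accept (a general formulation, not specific to any one system):

$$\alpha = P(\text{Type 1}) = P(\text{accept} \mid \text{impostor}) = \mathrm{FAR}, \qquad \beta = P(\text{Type 2}) = P(\text{reject} \mid \text{genuine}) = \mathrm{FRR}$$

The decision threshold trades one against the other, which is the trade-off sketched below.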

The worst case, security-wise, is that it “matches all faces” regardless of whether they are in the database or not.

Think of it as “propping open the fire door around the back” so anyone can walk in.

To avoid this, those designing the system will “err on the side of caution”, which is to “reject faces”.

Think of this as having an “old lock” on the door, where you have to jiggle the key –i.e. the face– around until the lock “finally works”.
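A toy sketch of that “err on the side of caution” trade-off (the match scores are invented; real systems calibrate thresholds on large datasets):

```python
# One decision threshold on a match score trades false accepts (FAR)
# against false rejects (FRR).
genuine  = [0.81, 0.77, 0.92, 0.68, 0.85, 0.74]  # same-person scores
impostor = [0.35, 0.52, 0.61, 0.44, 0.58, 0.29]  # different-person scores

def rates(threshold):
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

for t in (0.55, 0.65, 0.75):
    far, frr = rates(t)
    print(f"threshold={t:.2f}  FAR={far:.2f}  FRR={frr:.2f}")

# Raising the threshold pushes FAR toward 0 while FRR climbs:
# the lock gets safer, but the key needs more and more jiggling.
```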

[1] A “gong” has a couple of relevant meanings,

1.1, Firstly, it is a musical instrument, like a bell: a resonator with very significant Q. Energy from any kind of mechanical action will cause it to ring at a specific frequency, and the resonance will persist for a long time. Worse, repeated mechanical actions will cause the energy to build up to the point that it can be destructive (think shattering a glass by singing).

1.2, Secondly, it’s a slang term for a toilet with septic storage that does not have a U-bend or similar trap. Which means it can, and often does, stink, as well as occasionally being very bad for your health and well-being. Which is “a strange coincidence”, as it is quite appropriate for the “Current AI LLM and ML Systems” used.

[2] Error types have confusing “domain specific language” around them that can appear to make no sense. It’s also one of those “has to be precisely worded” issues. So for those reading along who “promptly forgot after the exam” and need a refresher,
https://www.simplypsychology.org/type_i_and_type_ii_errors.html

Chris Vail October 23, 2025 6:23 PM

Are these facial recognition systems supposed to be as good as the average human? Because I have encountered false positives from humans ever since I was able to grow a beard. In one case, a middle aged couple thought I was their son, at a distance of a few feet. It was only when they realized that I didn’t recognize them that they discovered their mistake.

Cognitive psychologists say that everything we perceive is an artifact of our brains, so it makes sense that when the brain detects a partial pattern match, it fills in the expected details whether or not they are present. Thus I suspect that if that couple had seen me standing next to their son they would not have been confused about who is who. And the reason they misidentified me was that “their son” is salient to them; they are primed to recognize him.

I can see two categories of problems with facial recognition systems: not recognizing the right person, and recognizing the wrong person. It seems the first category is common, perhaps because the facial recognition systems aren’t advanced enough to fail in the second category.

Clive Robinson October 23, 2025 8:49 PM

@ Chris Vail,

With regards,

“I can see two categories of problems with facial recognition systems: not recognizing the right person, and recognizing the wrong person. It seems the first category is common, perhaps because the facial recognition systems aren’t advanced enough to fail in the second category.”

You need to think a little further on the issues that arise from,

1, Not recognizing
2, Recognizing the wrong person

Think of the face recognition being used for “access control”,

The first error is immediately noticed by the person, as “entry is denied”.

The second error may never be noticed by the person, as “entry is authorised”.

Thus the perception of the system as being a failure only arises as people are “harmed”.

It’s one of the quirks of practical “physical security” systems that mostly nobody particularly cares about the second type of error.

Because in most cases –probably well over 9 out of 10– the people who try to go through the door are authorised to go through anyway.

Further, even if they are not authorised but do go through, they don’t do anything that affects the “practical” security.

The reason is the “polite person problem”. If I approach the door as you are going through it, do you,

A, Slam it in my face
B, Politely hold it for me

I can make option B more likely if I have a “company lanyard around my neck” and am carrying an awkward-looking box. Likewise, if I know you are going to go through the door, I engage you in conversation whilst you are several paces away from it and let you lead the way.

There are other “social engineering tricks” that work as well. So many in fact that physical security designers assume they are going to be used, thus design the security such that either the “polite person” problem does not matter, or it can not happen.

Usually the latter is done with “turnstiles” or equivalent that only allow one person through at a time. The other way is to use the door to “thin the herd”: it’s a communal door on a corridor, and the required level of security is the same on either side of the door, because further security measures, like individual office doors, provide security further down the corridor.
