Introduction
Today I came across a tweet by Erin LeDell, which intrigued me:
Wow, 500 Startups should be embarrassed by this. It's straight out of dystopian fiction and can only be backed up by bad science. Whoever did the due diligence on this before investing should probably lose their job.
— Erin LeDell (@ledell) November 21, 2018
This was a reply to a tweet by Delip Rao calling on 500 Startups to drop Faception from their portfolio because what they are doing is basically pseudoscience:
Hey @500startups, I am writing this short thread to draw attention to the shoddy science startup, https://t.co/EI7K16gD1G, you are incubating. Here’s a free informal due-diligence for you. https://t.co/PILyZlahxU pic.twitter.com/nZfV2ZZPom
— Delip Rao (@deliprao) November 20, 2018
Faception claims on its website that they "reveal personality from facial images at scale to revolutionize how companies, organizations and even robots understand people and dramatically improve public safety, communications, decision-making, and experiences." Essentially, they claim that their classifier algorithms can determine whether a person belongs to a particular category just by looking at the person's photo. The categories they list are:
- High IQ
- Academic Researcher
- Professional Poker Player
- Bingo Player
- Brand Promoter
- White-Collar Offender
- Terrorist
- Pedophile
Thus, we are witnessing a backlash against a potentially unethical (or perhaps scientifically unsound) investment in 500 Startups' portfolio.
Since this controversy concerns due diligence on a sci-tech startup investment, it seemed like a perfect opportunity to showcase our product vision in context.
NB: Keep in mind that our platform is under development and its capabilities are currently very limited. Nonetheless, we could glean some interesting insights into the Faception investment. This post is intentionally brief, but we welcome any comments below.
Research
Research with Avogadro One is pretty straightforward. Since we are investigating a technology that claims to determine whether a person is a criminal from a photo, we ran a couple of search queries built from those keywords.
Our search brought back several highly relevant articles for this topic:
- Unbiased algorithms can still be problematic from TechCrunch, 2018-09-30.
- Biometric Mirror highlights flaws in artificial intelligence from TechXplore, 2018-07-23.
- Michal Kosinski – “Face-Reading AI can Detect Your Political and Sexual Orientation” from NullTX, 2017-09-13.
- Faception – AI Powered Facial Recognition Technology from Nanalyze, 2017-03-22.
- Neural Network Learns to Identify Criminals by Their Faces from MIT Technology Review, 2016-11-22.
- The MIT Technology Review article links to a research paper, Automated Inference on Criminality Using Face Images, first published on arXiv on 2016-11-13 and last updated on 2017-05-26 in response to a media outcry over its results.
Findings
Our search was very quick, lasting no more than about half an hour, but it still surfaced a few valuable insights:
- This is a controversial topic, part of a broader 'Big Brother' theme concerning the privacy implications of pervasive, AI-powered image recognition in the hands of government agencies. This alone raises many ethical questions.
- Many AI models pick up unintentional biases from the way they are constructed and trained, and thus perpetuate or even amplify bias. This is a known problem with using AI as a predictive tool for law enforcement, and it exacerbates the ethical concerns mentioned above.
- Attempts to infer a person's propensity to commit a crime from their facial features date back at least to 19th-century physiognomy, so this is not really a new topic.
- At least one recent scientific paper (apparently not peer-reviewed) reports some success in profiling people by their photos, although the reported results must be interpreted carefully, i.e. not the way the media interpreted them. As the researchers explain in their response to the media outcry, the base rate fallacy means that the reported 89% algorithm accuracy and 7% false positive rate imply only a ~4% probability that a person labeled a 'criminal' by their algorithm actually is one (see the sketch after this list).
- Faception doesn't detail how exactly they have built their models and hasn't published any papers, so their approach is a mystery.
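To see why this happens, here is a minimal sketch of the calculation via Bayes' theorem. The true positive and false positive rates are the paper's reported figures; the 0.3% base rate of criminals in the general population is our illustrative assumption, chosen only to show how a low prevalence drags the probability down to a few percent:

```python
def ppv(tpr: float, fpr: float, base_rate: float) -> float:
    """Probability that a positive prediction is correct (Bayes' theorem)."""
    true_positives = tpr * base_rate
    false_positives = fpr * (1.0 - base_rate)
    return true_positives / (true_positives + false_positives)

# The 89% accuracy (treated here as the true positive rate) and the 7%
# false positive rate are the paper's reported figures; the 0.3% base
# rate of criminals in the population is our illustrative assumption.
print(f"P(criminal | flagged) = {ppv(tpr=0.89, fpr=0.07, base_rate=0.003):.1%}")
# prints: P(criminal | flagged) = 3.7%
```

The intuition: when criminals are rare, even a small false positive rate applied to the huge non-criminal majority swamps the handful of true positives.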
Conclusions
So what is the takeaway? (Keeping in mind that this was a quick exercise, not the proper due diligence process we normally go through when considering investments in startups.)
- If you decide to invest in Faception, prepare to face criticism from both the scientific community and the privacy camp. You will have to defend your decision against the many ethical concerns around AI applications in law enforcement.
- Before investing, ask Faception about their models, their statistics, and especially how they control for bias. Also ask them about their ethical principles. Maybe even ask their customers whether they considered the base rate fallacy when making the purchase decision. If they considered only the accuracy, they may eventually discover that the product does more harm than good because of the sheer number of false positives (see the sketch after this list).
- It seems obvious that Faception would be a controversial investment, especially after the recent backlash against tech giants working with governments on military applications of AI. For some investors this alone might be reason enough to stay away from Faception, regardless of its commercial potential.
- If the ethical concerns don't scare you, do more research. Even with the few results we found during our brief exercise, you can approach an expert with specific questions about Faception rather than a generic query like "what do you think about it?"
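To make the "too many false positives" point concrete, here is a quick back-of-the-envelope sketch using the same assumed numbers as in the Findings section (the 0.3% base rate again being our illustrative assumption, not a figure from Faception or the paper):

```python
# Estimate the false positive burden when screening 100,000 people,
# under the same assumed rates as above (base rate is illustrative).
population = 100_000
base_rate, tpr, fpr = 0.003, 0.89, 0.07

criminals = population * base_rate                  # 300 actual criminals
flagged_criminals = criminals * tpr                 # ~267 correctly flagged
flagged_innocents = (population - criminals) * fpr  # ~6,979 wrongly flagged

print(f"{flagged_criminals:.0f} criminals flagged, "
      f"{flagged_innocents:.0f} innocents flagged")
# prints: 267 criminals flagged, 6979 innocents flagged
# i.e. roughly 26 innocent people flagged for every criminal caught
```

A customer who looked only at the headline accuracy figure would never see this ratio coming.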
Obviously, a good due diligence exercise would dig deeper. You should check management profiles and patents, look for more research papers on the subject, talk to experts (if you have the money or the connections), and talk to the company (if you are in a position to ask for confidential data). Avogadro One will make many of these tasks quick and easy to perform, but you would still have to exercise your judgement when making the final decision. After all, what's the fun of investing if an algorithm makes the decision for you? Especially when you know about the base rate fallacy 😉
If you are a startup investor, please answer our anonymous questionnaire, which will help us get a better understanding of your needs.
Sign up for our mailing list to find out when you can use our awesome platform in your due diligence process!