A.I. thinks I’ve only got one ear

I will explain the title of this post at the end (I think that’s called clickbait) but first things first. Kudos to Netflix: they have now made three impactful documentaries exposing the dangers that AI-driven manipulation of data poses to society and our civil liberties. These malgorithms are what Cathy O’Neil, who features heavily in one of the films, calls ‘Weapons of Math Destruction’. First came ‘The Great Hack’ in 2019, which exposed the deeply disturbing scandal of Cambridge Analytica and its manipulation of voter behaviour using data Facebook had provided, resulting in Mark Zuckerberg having to appear before Congressional hearings. The following year Netflix debuted two more films: first ‘The Social Dilemma’, then ‘Coded Bias’.

I wrote an eBook about my reaction to ‘The Social Dilemma’. TSD focused on how social media was being driven by venal, amoral algorithms designed to maximize advertising revenues. These algorithms learned that the best way to do this was to feed people content that pandered to their prurience, prejudices and psychoses, hooking them like addicts. The result, the unintended consequence, is an increase in mental illness, especially among the young, confirmation bias and, most concerningly for liberal democracies, polarization of opinion to the point where rational debate is all but extinguished. So I chose to write my eBook as a contribution to a more rational – Socratic – discussion, based on some small-scale research I conducted among opinion leaders, on the basis of which I attempted to offer possible solutions. I’ll come back to those.

The third Netflix documentary of 2020, following closely on the heels of TSD, was ‘Coded Bias’, directed by Shalini Kantayya and featuring Joy Buolamwini among many other experts and activists, mostly women from diverse backgrounds. This was entirely appropriate, since Joy’s work, carried out at MIT, exposed how facial recognition surveillance powered by AI was reinforcing racial and gender bias. The efforts of Joy Buolamwini, Cathy O’Neil and other prominent activists like Silkie Carlo, founder of ‘Big Brother Watch’ in the UK, have had some notable successes in forcing governments and law enforcement agencies to curtail the use of facial recognition surveillance. However, there remains widespread commercial use of AI that affects people’s chances of gaining employment, housing, credit, insurance and healthcare, based on algorithms that are unregulated and flawed, in particular AI that has been shown to be negatively biased against the poor, racial minorities and the unconventional. AI is therefore reinforcing social inequality, preventing social mobility and restricting individual self-expression. This is just as terrifying as the manipulation of social media to change not just what we think but the way we think, our most fundamental human right, and the manipulation of elections, an attack on the very foundation of democracy.

All of this has been exposed in three documentaries produced by Netflix. Amazon and Apple both make lots of documentaries, but none so far on the dangers of big data and AI. One wonders why… but, as I say, kudos to Netflix. I guess Netflix uses algorithms only to commission new content, and to suggest available content, that they think you might like: more like weapons of individual entertainment than of mass destruction.

I said I would return to potential solutions to this AI challenge, and we need solutions because we do want, we desperately need, the positive use of AI to help us take on the Herculean tasks of tackling climate change and food poverty and securing better health opportunities for all. As an atheist I don’t believe we were created by God, but many of those who do also believe we were created in his/her/their likeness. They explain away humanity’s capacity to do as much evil as good as God giving us free will. Perhaps God did create us to be just like him/her/them, and perhaps, having given us free will, he/she/they did not fully understand the ramifications until it was too late to do anything about them. This seems to be the perfect metaphor for AI. We created it and we gave it lots of data about us so it could think like us, maybe be better than us, certainly a lot faster than us. AI can only learn from big data (which, remember, means not just lots of it but multi-source). The biases that ‘Coded Bias’ talks about happened because the data we gave the AI to learn from was skewed to, let’s call it, ‘white privilege’. So we created AI to be like us, but only some of us, and we allowed it to develop in ways that were both good and bad for the world, just like us, and it is in danger of getting out of control, just like us. So how do we do better than God? How do we get AI back under control, and how do we direct it towards things that are good for a free and open society, a world of equal opportunity for all irrespective of class, ethnicity, sexuality, gender or faith (personally I’m not so sure about the last of those, given the religious extremists out there, but maybe with AI we can sort them out too)?

China, it must be said, is on a very different agenda. They are 100% explicit that they do not agree with democracy and that they want to use AI and data to control their society. There is no secret to what China is doing with data and facial recognition; we saw it in Hong Kong, deployed against the people who dared to challenge the state. In China you get a Social Credit Score, like a financial credit score but all-encompassing. If you do the wrong thing, if you say the wrong thing, even if people you know do or say something wrong, you are punished, and the state, the CCP, will know exactly what you are doing and saying, where you go and with whom you are consorting, because they have all your data. The state can control you by controlling your Social Credit Score, thereby restricting your ability to get housing, access to public transport & travel, healthcare, financial services, you name it.

That makes them terrible, right? China is much worse than the free Western democracies – but is it? Of the 9 major organizations developing big-data AI, 3 are in China and 6 are in the USA. Much the same thing is happening in America as in China, with two important differences: a) you don’t know about it, it’s invisible, and b) the power lies in the hands of a few huge commercial enterprises who care first and foremost about profit and shareholders. People are denied jobs, financial services and housing; information and content are pushed at us with bias and partiality; all without our knowledge, because we are being watched, measured and judged by AI algorithms that not even the people who created them fully understand. Governments have used AI and data in ways that undermine civil liberties, but they are being called out, they are accountable, although there remains an understandable concern that an extreme left- or right-wing government might not be so shy in abusing the power of AI & data. As they say, just because you are paranoid it doesn’t mean they’re not out to get you.

So, solutions. I’ll start with the two proposals I’ve made previously because I still believe they are 100% right and both doable.

Firstly, social media needs to be regulated and forced to move to a subscription model. Social media generates a huge amount of data due to its pervasiveness and frequency of use; AI learns from data, and social media is where it does most of its homework. These are powerful platforms and they should require licenses that can be revoked in the case of malfeasance, just as newspapers and TV once did. If the business model is subscription-based they can still be very large businesses, but most importantly the algorithms would be trained to build customer loyalty, not eyeball addiction. If you pay something every month to use Facebook, even just $1, then you are a customer, not data fodder.
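To make the point concrete, here is a toy sketch of the idea; every name, field and number in it is hypothetical, not anyone’s real system. The ranking machinery is identical in both cases; only the objective it is trained to maximize changes.

```python
# Toy sketch (all names hypothetical): the same feed-ranking machinery
# serves very different masters depending on the objective.

from dataclasses import dataclass

@dataclass
class Post:
    id: str
    predicted_watch_time: float    # proxy for "eyeball addiction"
    predicted_satisfaction: float  # e.g. from surveys and long-term retention

def rank_for_ads(feed: list[Post]) -> list[Post]:
    # Ad-funded model: maximize time on site, whatever the content.
    return sorted(feed, key=lambda p: p.predicted_watch_time, reverse=True)

def rank_for_subscribers(feed: list[Post]) -> list[Post]:
    # Subscription model: a paying customer who churns is lost revenue,
    # so the objective shifts to long-term satisfaction and loyalty.
    return sorted(feed, key=lambda p: p.predicted_satisfaction, reverse=True)
```

Same data, same code, different objective: the subscription version has no incentive to addict, because a customer who burns out and cancels is lost revenue.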

Secondly, there should be government investment, together with commercial incentives, to develop platforms that allow people to own, control and, when they choose to, transact their own data. Data is the new oil, but it has been allowed to fall into the hands of robber barons. It is your data; you should be able to harvest it, store it and use it however benefits you most. This is not a quick fix and will require secure technology infrastructure with the scale and complexity we see today in financial markets and services. In my view it could be an opportunity for the financial sector, who have the resources and customer base to make this work. Even if you don’t like your bank you have to trust them, because they manage your most sensitive information already. A bank could be trusted to store your personal data, allow it to be transacted on your terms to get you a return, and manage those transactions. I don’t understand why banks don’t look at data in the same way they used to look at cash: bring it to us, we’ll keep it safe and give you access to it when you want, and if you’ll allow us we will lend it out to people (encrypted to preserve privacy) and make it work for you. Instead of going to Facebook, or any of the data trawlers, scrapers and scavengers, big brands would go to the banks and buy the profiles they are looking for to promote whatever they want. People would consent to see brand content, anonymously, if it was made worth their time or interest.
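The mechanics are simple enough to sketch in code. This is a minimal illustration of the principle, not a real banking API; every class, method and fee below is invented for the example.

```python
# A minimal sketch (all names hypothetical) of a personal data vault:
# nothing leaves without the owner's explicit, per-purpose consent,
# and every paid access is logged.

from dataclasses import dataclass, field

@dataclass
class DataVault:
    owner: str
    records: dict = field(default_factory=dict)   # e.g. {"purchase_history": [...]}
    consents: set = field(default_factory=set)    # (requester, purpose) pairs
    ledger: list = field(default_factory=list)    # audit trail of paid accesses

    def grant(self, requester: str, purpose: str) -> None:
        self.consents.add((requester, purpose))

    def request(self, requester: str, purpose: str, fee: float):
        # The brand pays the owner; without consent it gets nothing.
        if (requester, purpose) not in self.consents:
            raise PermissionError("no consent on record")
        self.ledger.append((requester, purpose, fee))
        return self.records.get(purpose)

vault = DataVault(owner="alice")
vault.records["purchase_history"] = ["bike", "helmet"]
vault.grant("acme_brand", "purchase_history")
profile = vault.request("acme_brand", "purchase_history", fee=0.50)
```

The design choice that matters is that consent and payment sit in front of the data, not behind it: the default is refusal, and the audit trail belongs to the owner, not the advertiser.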

Put these two things together – social media on subscription and the mechanism to leverage one’s own data – and you have solved a big part of the problem with no need for regulation.

That said, there is still a role for regulation to prevent data abuse at the hands of AI and hold miscreants accountable, but it has to be co-ordinated internationally, and that seems like quite the challenge in a world of growing nationalism and weakening global alliances. That was my conclusion, but something in ‘Coded Bias’ gave me some optimism. The point was made that algorithms need an equivalent of the FDA, the US Food and Drug Administration. We don’t allow people to market pharmaceuticals or foods that have not been tested or that lack the appropriate quality controls, and this does, more or less, work across international borders. So why can’t there be an IAA, an International Algorithm Administration, backed by international law, that enforces the responsible development of AI?

Finally, I want to address the issue of whether big tech companies are actually able to behave responsibly. They say they want to, but they always fall back on the defense that the scale of their operation, the sheer number of users and data points, makes it impossible to have foresight of all unintended consequences and oversight of every malpractice. Let’s focus on the issue raised in ‘Coded Bias’: that facial recognition technology is biased against certain social groups, generally the disadvantaged groups who are under-represented in the data the AI is learning from. In my research I came across something new to me (I never claimed to be a technology expert). It is called synthetic data, and it is predicted to become a huge industry. The models and processing needed to develop synthetic data are no doubt very complex, but the output is very simple to explain; the clue is in the name. This is artificial data: data that’s confected, invented, made up. It is needed to fill gaps in real, authentic data to help AI learn to do whatever it is developed to do. For AI to be effective it needs lots of data, and the data has to be comprehensive and statistically representative. So producers run lots of simulations, based on many different scenarios, to generate data that plugs the gaps in the real data.
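Stripped of the industrial complexity, the core idea can be sketched in a few lines. This is a deliberately crude illustration, assuming a simple Gaussian model and invented numbers, not any vendor’s actual method: one group is under-represented, so we model it and sample synthetic rows to plug the gap.

```python
# Hedged illustration: fill the gap left by an under-represented group
# by fitting a crude model of it and sampling synthetic rows.

import numpy as np

rng = np.random.default_rng(42)

# "Real" data: 950 rows for group A, only 50 for group B (two features).
group_a = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(950, 2))
group_b = rng.normal(loc=[2.0, 1.0], scale=1.0, size=(50, 2))

# Model group B with its sample mean and covariance, then sample
# enough synthetic rows to balance the dataset.
mean_b = group_b.mean(axis=0)
cov_b = np.cov(group_b, rowvar=False)
synthetic_b = rng.multivariate_normal(mean_b, cov_b, size=900)

balanced = np.vstack([group_a, group_b, synthetic_b])
print(balanced.shape)  # (1900, 2): 950 real A, 50 real B, 900 synthetic B
```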

This is a terrifying concept, but it is not merely conceptual; it is happening right now. Many, if not most, of the systems developed using machine learning and AI use synthetic data, because it gets around the problem of sensitive and confidential data that is hard to obtain. Obviously it is open to abuse: you can create data that teaches AI to discriminate prejudicially. So, per the previous point, there has to be regulation. However, it can also be used to eliminate bias.

As humans we are programmed to be biased; our brains work by pattern recognition. We know not all snakes are dangerous, but some are, so if it looks like a snake we run. It’s a basic survival instinct, and instincts are very hard to shift. When we look at an individual we take in the visual cues and form judgements and, just like the malgorithms, our brains have been trained to make prejudicial assumptions on flawed information. Someone looks a particular way, talks a particular way, exhibits certain behaviours, and we make a negative judgement; there is no point in pretending otherwise. That judgement can be unfair, but as humans we have the ability to override our unconscious bias and make a conscious decision to look deeper, to give someone a chance, before making a decision that affects them. Synthetic data allows us to programme that humanity into AI. ‘Poor people are a bad credit risk’: the real data will teach AI this lesson and make it hard for certain social groups to access the loans that might help lift them out of poverty, while the same system makes it very easy for the well-off to buy a second car. One thinks it would be better for society to make finance available to facilitate social mobility rather than more physical mobility for the well-off. If so, we can use synthetic data to upweight the scenarios in which poor people are not unfairly treated as bad credit risks.
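Again, a hedged sketch of what ‘upweighting the scenarios’ might look like in practice. The feature, the labels, the synthetic scenario and the weights below are all invented for illustration; a real credit model would be far richer.

```python
# Hedged sketch: blend synthetic repayment histories for low-income
# borrowers into the training set so the model cannot learn
# "poor = bad risk" purely from a skewed sample. All numbers invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Real" (skewed) data: feature = income, label = 1 if the loan was repaid.
income = rng.uniform(10, 100, size=500)
repaid = (rng.uniform(0, 1, size=500) < np.clip(income / 100, 0.1, 0.9)).astype(int)

# Synthetic scenarios: low-income borrowers who *do* repay, generated
# to correct the gap the real sample leaves at the bottom of the range.
synth_income = rng.uniform(10, 30, size=200)
synth_repaid = np.ones(200, dtype=int)

X = np.concatenate([income, synth_income]).reshape(-1, 1)
y = np.concatenate([repaid, synth_repaid])
# Weight the synthetic rows to tune how hard they pull on the model.
weights = np.concatenate([np.ones(500), np.full(200, 1.5)])

model = LogisticRegression().fit(X, y, sample_weight=weights)
print(model.predict_proba([[15.0]])[0, 1])  # repayment probability for a low earner
```

The weight on the synthetic rows is exactly the policy lever: it encodes a human judgement about fairness that the raw data, left to itself, would never learn.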

‘Coded Bias’ certainly got me thinking, so well done Netflix, again. My brain works in strange ways, and the focus on racial bias in facial recognition made me think about ears. A lot of images of people will be side-on as they walk past the camera that’s recording them, so it will only detect one ear. The AI might conclude that lots of people, even most people in certain locations, have only one ear. Having only one ear has a medical term, microtia, and it is more common than I thought when I looked it up. It occurs in 1 to 5 out of every 10,000 births, which works out at somewhere between 0.8 and 4 million people, out of the global population of 8 billion, who have only one ear. Not common then, but not unheard of in the real world. We could teach AI about this using synthetic data, because samples of real-world data would be unlikely to reflect the true prevalence of microtia. It might prevent AI drawing the wrong conclusions, either ignoring microtia or over-estimating it. On the other hand, it might help facial recognition spot a one-eared crook like Mark ‘Chopper’ Read, the Australian criminal who cut off his own ear in prison to get an early release (it’s a long story). My question is very simple: would a machine have even thought about this, would it have looked up the data on microtia, searched online for an example of a one-eared crook? I doubt it. So, if you have them, listen with both ears and both eyes wide open; we need to use AI, not let AI use us.

Jordan Peterson and the Ancient Greeks

The acclaimed, and controversial, academic Jordan Peterson has founded his own university, Ralston, with two campuses: one in Savannah, Georgia, USA, and the other on Samos, a Greek island, home to the goddess Hera, the philosopher Epicurus, the astronomer Aristarchus and the great mathematician Pythagoras. I suspect JP might also have a holiday home there.

The ambition behind this new humanities-focused university is the creation of a new unifying ethos (that is to say, a ‘character’ based on a coherent set of values), achieved by reconnecting with the Ancient Greek philosophers. Students are required to learn Ancient Greek in order to study directly from the original texts of Aristotle, Marcus Aurelius et al.

I really like the sound of this. Indeed, if I were 40 years younger I’d be applying to enroll. There are sadly two problems with this wishful thinking. Firstly, 40 years ago I would have had no interest in philosophy, ancient Greek or otherwise, and, having given up even Latin as soon as I could in my not-so-classical education, the idea of having to learn Greek would have been too big a hill to climb. My interest in philosophy, Greek philosophy in particular, has grown slowly over the years as career gave way to time on my hands. If only we could live our lives backwards, starting with a lot more wisdom and some financial freedom and growing towards the youthful energy to make best use of both. Secondly, admission to Ralston is very limited and very meritocratic. I am not smart enough to get in, I fear.

So I will just commend the opportunity to attend what sounds like a really interesting university, new yet ancient in ethos, to those who are both younger and smarter than me.

I do admire Jordan Peterson, and I am sure the personal one-on-one sessions students get with him will be a major draw card. Like the ancient Greek philosophers, he is trying to make sense of the world, to debate and think his way through to a better vision for humanity. In the process he challenges intellectually weak thinking at either end of the political spectrum, although it is the left whom he seems most to antagonise. He walks head up, back straight (something he commends in ‘12 Rules for Life’) into the trans and feminist debates armed with the most irritating of weapons: well-researched facts and well-structured argument. Misogyny, he argues, is only one of over 20 factors that explain the pay gap between men and women, and by far the least important. There does appear, he notes, to be a pattern of questioning gender when civilisations collapse, as was the case with the Greeks and Romans. Transgenderism and the right to self-identify, he contends, are more of a decadent (my word) social contagion than a justified assertion of human rights by a significant and repressed minority (as was the case for homosexuals). He may or may not be right about the latter, but he is entitled to air his opinions and his opinions are always worth listening to.

Yes, if I could go back in time I’d love to be a student of his, but not so much of Socrates. Challenging philosophical thinking might attract some very negative press and social media attention these days, as JP has discovered, but back in the day it got Socrates and some of his followers killed.

One-eyed Dan – Doing more with less

The D-Marketing eBook is now live and can be downloaded here in ebooks. I warn you, it’s over 90 pages long and, important as the topic is – saving the planet by reducing waste and wasted marketing – it’s a chunky read with my usual clunky writing style. There is an ‘In a nutshell’ section, and in fact all the sections can be accessed directly from the index page, so the reader can skip any bits they want and get straight to what takes their fancy. But still, 92 pages is a lot, especially since I have always been of the view that far more business books are bought (or downloaded) than are ever read.

This was preying on my mind, so I thought about the ‘business books’ that have been widely read, books like ‘The One Minute Manager’ or ‘Who Moved My Cheese?’. They are short and light, with a bit of a story that makes you think. Then I thought about the most read books in the world: children’s books. I lost count of the number of times I read the marvellous ‘Cops and Robbers’ by Janet and Allan Ahlberg to my kids (and lately to my grandson). Honestly, I still know most of it off by heart.

Bingo, I thought: I’ll write a short story that comes across as a kid’s book but which is in fact the summary of the arguments for D-Marketing. And so I did. You can find it in “Books”; just click the “Buy on Amazon” button and you can download it for free.

It’s the story of One-eyed Dan, who saw more with less (get it?), and it breezes along in just over 20 pages, including some illustrations. Enjoy, you’re welcome. Go change the world.

Consumer, Data Cow or Person of Interest?


Along with many others, I rail against the persistent use of the word ‘consumer’ by business, the media and even government. We are not, none of us, merely consumers; we are sentient, we are people. Consuming things is a by-product of our existence, not our defining characteristic or our purpose. Despite the confected, faux regard for ‘consumer power’ or ‘consumer rights’, there is something inherently disrespectful and patronizing about referring to people as ‘consumers’. We are, on some occasions, customers; everyone is in some ways, on some days, someone’s customer. That’s fine: it conveys the idea of a willing transaction, an adult-to-adult relationship based on mutual interest. There is no context, in my view, where the word ‘consumer’ could not be more respectfully replaced by ‘customer’, and many where ‘person’ or ‘people’ would work just as well. Labelling people as ‘consumers’ implies that our only usefulness to the state is as units of labour to produce and units of consumption to justify ever more production. We’re just like the human batteries in ‘The Matrix’, or Winston in Orwell’s 1984, put on earth to support the system, ‘Big Brother’.

Consumers? You might as well call us ‘eaters’, ‘breathers’ or maybe ‘hungry, needy oxygen-users’.

If we buy into this idea of ourselves, even if only in part, as ‘consumers’, we are also giving license to a system that encourages us to consume more and more and more. Creating demand for ever-improving products and services is the bedrock of liberal capitalism, a Western system that has done far more good than harm and that, as the old line goes, is better than the alternatives. Creating excessive consumption, though, is bad. People know the difference; people have come up with the idea of a more circular economic system to limit waste, pollution and the over-depletion of finite resources. Consumers consume, just like gamblers gamble. People know when to stop.

So can we please confine the term ‘consumer’ to Room 101? Let’s just incinerate it. We are customers and/or we are people. Now we can turn to the far more dangerous threat to our humanity: our real purpose, it seems, is to provide data. We are fast becoming ‘data-cows’. Justin E.H. Smith is Professor of Philosophy at the University of Paris. In a recent article for The New Statesman he rather depressingly concluded that one’s job is irrelevant; whether professor or production worker, our role is to generate data.

… it is growing ever clearer that the true job of all of us, now, is to be milked for data by the provisioners of online content.

What we do now, mostly, is update our passwords, guess at security questions, click on images that look like boats to prove we’re not robots. We are trainers of AI and watchers of targeted ads. 

Justin E.H. Smith: ‘Meritocracy and the future of work’ The New Statesman, April 2021

The logical conclusion of this is that when enough data has been milked from us it will all be uploaded to machines and humanity will have served its purpose. It sounds like a sci-fi plot, but worryingly life has a habit of imitating art. And the on-ramp to the brave new, de-humanized world has already been unveiled, and not just by Zuckerberg. On this very day the technology company Improbable has raised £150 million to build M2, an infrastructure for the metaverse, bringing together work and entertainment in a virtual, fully online world where all we will do is spew out yet more data for learning machines to manipulate.

To cut a swathe through history: we evolved from a repressive feudal economy to a more open, liberal capitalist economy through the ability to earn our own money and have control over how we spent it, overthrowing the barons in the process. The new Techno Barons (or governments, as in the case of China) can return us to subservience if we do not have control over the data we generate and how it gets used and monetized. We have to fight for this. We need to support the new platforms that enable us to own and transact our own data (you can check some of them out here), and in the meantime we need to resist any and every effort to get us to share our data with people who will exploit it to their, not our, benefit. Whenever you can, don’t give your email address, don’t sign up, refuse permission, block the cookies, use VPNs.

We need to be able to hold up our hands, when we choose to, and declare ourselves persons of interest, with opinions, ideas, preferences and purchasing power unique to us. We are all interesting because, despite what the data and behavioural scientists tell you, despite what the algorithms predict, we make surprising choices and act out of character. We think, we have ideas, we create.

You are not a consumer, you’re more than a data-cow, you are a person of interest. Your data has great value, own it, and use it on your own terms.

Solving the Social Dilemma

I’ve just finished my article responding to the Netflix documentary ‘The Social Dilemma’. I ran a small survey to help me write it, and results are still coming in, so there may be some further additions and editing to be done, but I wanted to get this first version out there and see what people think. Please let me know.