Human enhancement AI

posted in: Technology

Bless me father for I have sinned. It is six months since my last blog post… but since I am a ‘deinfluencer’, not an influencer, does it really matter? Deinfluencer, you ask? Well, yes. Once again I thought I’d come up with something new (remember DeMarketing – of course you don’t, see point above) but it turns out deinfluencer is quite the new thing. Deinfluencers are people who try to persuade us all that materialism is bad – rather than promote brands, they try to persuade people not to buy things. Building on my DeMarketing thing, I guess by that definition I am an aspiring deinfluencer, but that is not what I had in mind. I meant to convey that I am the opposite of an influencer – no-one reads my blogs and I do nothing to promote them, so my blogs influence no-one, about anything, ever. Maybe non-influencer would be better?

So, who cares that I have not blogged for a while? Only me and only because writing helps me get things clear in my mind. I don’t write blog posts to be read, I write them for posterity and I write them to help me think. And had anyone been reading my blogs they’d know I think a lot about AI.

Any new technology has the potential to do good or cause harm, just like we humans I suppose. And just like us, new technology normally does both. Trains, cars, nuclear energy, the internet, social media and now AI. Trains laid the tracks for the industrial revolution and the creation of the holiday industry; the resulting urbanization fractured communities and created slums full of displaced people. Cars liberated us and, it turns out, destroy the planet. Nuclear energy laid waste to whole cities and has the potential to destroy the world, but it can also power the world, albeit with toxic waste. The internet democratized knowledge and connected the world whilst enabling widespread access to pornography, gambling and drugs. Social media keeps us close to our family and friends and creates a free platform for people to share ideas and support each other. It also creates division by feeding us content that reinforces extreme views, and it helps our children lose their self-esteem and harm themselves.

It would be reasonable to expect AI to have a similar yin and yang of good and bad, and everyone does, but will it do more harm than good, or more good than harm? In the red corner are the people who fear that AI will take over from humans in a world where, if we are not destroyed as a species – Terminator – perhaps our only role will be as batteries – The Matrix. In the blue corner are those of us who hope AI can take away the things that we don’t enjoy doing, and are not best suited to, thus liberating us to do more of the things that only humans can do, or do best. AI might be able to solve the existential challenges in health, the climate, poverty and crime. It might even enable us to redefine physics and explore the universe – Star Trek, Interstellar, Lucy. The future has always been presaged in sci-fi movies, but the movies paint very different pictures of it.

You hope for the best, but there are a lot of experts who fear the worst. 35,000 of them signed a 22-word warning about AI in 2023, a list including names like Elon Musk and Sam Altman that is hard to ignore. There is a ray of hope and optimism if you believe the old adage that the bookies normally get it right – the bookies in question being tech investors (and let’s face it, investing is sophisticated betting where inside knowledge, due diligence and expert advice tilt the odds in your favour). The smart money is going into Human Augmentation AI: start-ups that are using AI to help humans be better humans. There will be misuse of AI, even reckless use, unless the politicians get their act together (the ray of hope just faded a lot). Unquestionably the ‘sin’ industries, led by porn, will greedily exploit the potential of AI, but maybe, just maybe, Human Augmentation will create a better world, one in which we are able to be better citizens – or one where we will be forced to be, as is already the case in China, depending on your definition of ‘better’.

Yep, AI is perplexing and scary – most people’s reaction is to cross their fingers and stick their heads in the sand. My reaction to anything perplexing is, most often, to want to know more by getting involved. I didn’t want to invest in any more marketing tech start-ups; my record of doing so is very mixed, just about more wins than losses but nothing spectacular. But I told myself that if I could be persuaded to take an active interest in a new marketing tech start-up, it would have to involve AI.

I have long believed that if you focus on an outcome it is more likely to happen. Forty years ago, prompted by a session given by a Harvard professor on a business course, I wrote down my life goals. I thought at the time they were ambitious. Within 15 years I had achieved them all – I should have dreamed bigger. I don’t believe in fate, but I do have faith in neuro-linguistic programming (NLP). If the thought is clear in your mind, you are attracted to opportunities that draw you to your goal and more immune to distractions. NLP is a way of changing thought patterns to achieve desired outcomes, and a good way to change your thought patterns is to commit something to paper.

Guess what? My interest in AI, expressed in my blogs, has resulted in me becoming involved in a new business that is deploying AI to help tech businesses win clients. The formidable founder found me on LinkedIn and would not take no for an answer. We are about to unveil a very exciting (we think) new approach to business development, aka sales, using Human Augmentation AI. After less than a year the business is already generating profit, even without next-level AI, and so far we have not had to raise any capital.

Marketing tech using Human Augmentation AI – or, as I prefer to say, human enhancement – is hardly tackling the big societal challenges, but by making it much easier, faster and cheaper to win clients we might help the tech businesses that can. The faster they build their client pipeline, the easier it is for them to raise the money to build the tech to grow their business.

Otherwise, take my advice, look for an opportunity to invest in AI Porn.

You heard it here first – of course you didn’t, see point above, I’m a non-influencer.

Can AI save democracy (and sell washing powder)?

I’ll deal with washing powder later; let’s start with politics. Democracy, as we all know, is the Greek concept of rule by the people, for the people, through voting their representatives into (and out of) power. It sounds simple; it’s anything but – democracy poses a lot of questions. Who is eligible to vote, or to be a representative? Is representation proportional or geographic/constituency-based, or a combination of the two? How does the government actually work, and how long is its term of office? What are the rules regarding the election process? Is it compulsory to vote, or can people opt out?

The troubling fact is, virtually every ‘democratic’ nation in the world has a different form of democracy, one that reflects its history, its culture and whatever suits it. In Switzerland, Australia and some 37 other countries it is compulsory to vote. In most of the world it is not; in many big countries it would be very hard to enforce. Around 100 countries use proportional representation (although not uniformly) and around 50, like the UK, use ‘first past the post’. Many others have transferable votes (e.g. you vote for your first and second choice). Democracy is one idea with lots of different ways to implement it. Furthermore, whatever democratic system is adopted, it is normally enshrined in a ‘constitution’ and is virtually impossible to change. The USA has a system that was designed in the late 18th century.

And on the subject of the USA, is it even a democracy? A plutocracy is defined as ‘a system of government controlled exclusively by the wealthy, either directly or indirectly’. Does that sound familiar? You cannot win elections in America without big campaign budgets; you need name recognition. So you have to raise money from the wealthy, be wealthy yourself, or both, to become President. You can also win more votes than the other candidate and still lose. This happens fairly often because of the electoral college system, which could be argued to be outdated and no longer fit for purpose – but no-one does argue this. Trying to fiddle with the constitution is a sure-fire vote loser.

It is very clear now that half the world does not believe in democracy, and for us, the half that does, most of us do not really question what we are defending, yet we are increasingly less confident in our politicians and, ergo, the ‘system’. Now AI comes along and everyone is terrified that it will subvert our democratic process by using its awesome power to generate highly convincing but fake news and then disseminate it widely – especially since the half of the world that sees democracy and open society as the enemy will invest heavily to ensure it does exactly that.

MIT in the USA, a highly respected institution, has recently published a study on the potential impact of AI on politics. Far be it from me to question the smartest brains, but I will anyway. They look at it through the same lens that most people look at AI through: they set out a six-stage pathway to AI taking over from humans, in this case taking over government. The final milestone is the election of an AI ‘entity’ into a position of power. An earlier milestone, which they argue is pivotal, is the point at which AI-generated testimony is admissible in a court of law. This does not seem so far-fetched, but it would of course rely on AI being proven to be totally objective and evidence-based. We’ve all seen the courtroom dramas where the ‘expert witness’, a hired gun, is exposed as neither objective nor in possession of all the facts. So why not AI?

If that is the case, and it seems more likely than not, perhaps AI could be used to improve and strengthen democracy. If you use a different lens to look at the impact of AI – the one I prefer to use, because I want to be optimistic – you look for ways humans and AI can work together to produce better outcomes (faster, more reliable and more creative) than either AI or humans can achieve alone. What if AI could help people use their vote more effectively?

You can use AI to pose questions and get answers that draw on all available evidence – you can even use it to suggest good questions to ask. So what if there were an AI system that allowed people to make the most informed decisions possible on how to cast their vote? You can choose to ignore the answers but it could cut through all the electioneering and campaign bullshit, all the fake news, all the posturing and spin doctors. It could level the playing field so the candidate with the most money, the best advisors, the smoothest media skills loses their advantage.

Take a simple example and some simple questions. Say your main concerns were climate change and the growing gap between the rich and the poor. You could ask AI to tell you which candidates also focus on these issues, what their voting record has been if they are seeking re-election, how hard they work and what relevant experience they have.

Well I decided to give it a go using Bard. I asked:-

I live in England. I care about climate change and the growing gap between the rich and the poor but I also want effective efficient government. Which political party should I vote for in the next general election?

Bard replied:-

The political parties in England that have policies that address climate change and the growing gap between the rich and the poor are:

  • Labour Party: The Labour Party has pledged to reach net zero emissions by 2030, create a million new green jobs, and invest heavily in public transport. They have also promised to increase taxes on the wealthy and corporations to fund social programs.
  • Green Party: The Green Party is committed to tackling climate change and social inequality. They have pledged to ban fracking, invest in renewable energy, and introduce a universal basic income.
  • Liberal Democrats: The Liberal Democrats have also pledged to reach net zero emissions by 2030. They have also promised to increase taxes on the wealthy and corporations, and invest in public services.

Bard also added:-

Ultimately, the best party for you to vote for will depend on your own priorities and beliefs. It is important to do your research and compare the policies of the different parties before making a decision.

I think Google are understandably covering themselves; a lot of questions you ask Bard get this ‘but you must make up your own mind’ kind of summary. Yes, but I asked and you gave me a list, and you did not say it was in no particular order, so my conclusion is that I should vote Labour. If I do not like the candidate on offer in my area for some other reason I might take the next best choice – that is the system in the UK. It might be clear who I should vote for, but because we do not have proportional representation and I do not want to waste my vote, I might choose the party most likely to be able to form a government. Perhaps – and I’m just putting it out there – Bard knows this, and that is why Labour comes top of its list? Note the part of my question where I say I value effective and efficient government.

Anyway, food for thought. Perhaps AI can empower democracy?

I’m supposed to focus on marketing, not politics, so I asked Bard another question, one that takes me back to my old job as Persil Brand Manager in the 1980s, where we used every trick in the book to outsell Ariel.

I asked:-

I have a family with young kids and live in London. Which washing powder should I buy, Persil or Ariel or is there something better than either?

The first reply said both Persil and Ariel were good quality, listed some things to think about and suggested other well-known brands like Fairy or Daz as alternatives. But I pushed it and asked it to tell me which was best overall. Bard thought for a minute and then replied.

The answer is Persil Non-Bio if you are worried about sensitive skin.

If not, Ariel.

Forty years ago, when I was running the Persil brand – I was the guy who fought tooth and nail against the ‘men in grey suits’ at the top of Unilever (they were all men, but to be fair some wore blue suits) for permission to re-introduce Persil Non-Bio so people had a choice between products with or without enzymes – I would have given you exactly the same answer, just as quickly.

I say you can trust Bard to tell you which washing product to choose, and I think the same might be true of politicians and political systems.

Can you read and write a book at the same time?

posted in: Uncategorized

I’m reading a very interesting book by Jeff Jarvis, ‘The Gutenberg Parenthesis’. It’s a fascinating analysis of the world before print, the impact of print and the potential of the internet and AI to take us back closer to the way the world was before print.

In the world we are entering, “Everyone can be connected. One-to-many is replaced by any-to-any and any-to-many”.

Jarvis’ argument (backed up by legions of other scholars) is that print created a linear world in contrast to the more fluid, non-linear, more conversational world before Gutenberg. The tradition of the ancient Greek Philosophers was based on “conversations as a tool of friendship, learning and ultimately democracy”. In the new world the ability to converse dynamically and globally can create amazing new possibilities for the advancement of society although undoubtedly there will be some big bumps along the way.

Anyway, this got me thinking. With AI and its access to everything on the internet, the very idea of a book can be reinvented as more of a conversation. You can read it whilst also helping to write it. So I gave it a go using, ironically, Bard AI.

I started the story off and we went backwards and forwards until we had written a completed short story. It is not, I must acknowledge, a very good story, but then it took less than 10 minutes.

This is how I started the story off:-

Ronny comes home late from work and walks into his house. There is nothing and nobody there; the house is entirely empty. Not only are his wife and kids nowhere to be seen, there is literally nothing in the house – it is as bare as the day it was built.

I’ll show the ‘to and fro’ at the end but here is the story we came up with. For a bit of fun, I asked Bard to write the completed short story in two different styles, Ian Fleming and Shakespeare.

The Spy Who Saved His Family

Ronny was a brilliant scientist who worked for a government agency. He was also a kind and gentle man, devoted to his family.

One day, Ronny came home from work to find his house empty. His wife and kids were gone, and everything had been taken. The police were unable to find any clues, and they suspected that Ronny’s wife had left him and taken the children with her.

Ronny was devastated. He didn’t know what to do. He started to investigate on his own, and eventually he learned that his family had been kidnapped by terrorists. The terrorists were demanding a ransom of $10 million, and they threatened to kill Ronny’s family if he didn’t pay it.

Ronny didn’t have $10 million, but he knew that he had to do something to save his family. He tracked down the terrorists and confronted them. He told them that he didn’t have the ransom money, but he offered them a trade. He would give them something even more valuable: information.

Ronny told the terrorists about all of the classified information that he had access to. He told them about government secrets, military plans, and everything else he knew. The terrorists were impressed. They agreed to release Ronny’s family in exchange for the information.

Ronny’s family was safe, but they were changed. They had been through a lot, and they would never be the same. Ronny was also changed. He had learned that the world was a dangerous place, and that he could never take anything for granted.

Ronny continued to work for the government, but he was more careful now. He knew that there were people who would do anything to get their hands on the information he had access to. But he was also more determined than ever to protect his family and his country.

The End

The Tragedy of Ronny

Ronny was a man of science, A brilliant mind, a gentle heart. He loved his wife and children well, And they were his life’s true start.

But one day, a tragedy did strike, His family was taken by force. The terrorists demanded a ransom, But Ronny had no such source.

He was distraught, he knew not what to do, He could not bear to lose his loved ones too. But then, an idea came to him, A plan that just might work, if he were keen.

He would give the terrorists what they wanted, But not in the way that they thought. He would give them information, The most valuable information he had.

The terrorists were impressed, They agreed to release his family in exchange. Ronny’s family was safe, But they were changed forever.

Ronny was changed too, He had learned that the world was a dangerous place. But he was also determined, To protect his family and his country, With all of his heart and soul.

The End

As I say, not that great, but we are at the beginning of this new technology and new world. Contrary to what most people are predicting, Jarvis maintains that the impact of technology arrives much more slowly than we think. He sees us as being right at the beginning of the recreation (re-establishment) of a world, one that will take pretty much as long to emerge as it took Gutenberg’s printing press to determine (and constrain) the world we are leaving. My perspective is this: we’ve had mobile phones since the early 1980s. It has taken over 40 years for them to progress from unwieldy brick-like contraptions with very limited reach and functionality to the sleek smartphones we have now. In terms of AI we are right at the beginning – think where we could be in a decade or two.

We could have fully interactive, non-linear books and films – ones that anyone could read/watch and write/create at the same time. Print levelled the playing field: it gave Martin Luther the opportunity to challenge the mighty Roman Church in his native German rather than Latin, the language of the elite and powerful. Jarvis and many others point out that it was print’s ability to standardise language that created the foundation for nations drawn along different lines. AI can make knowledge, information and reach accessible to all (well, almost all), and that too might create a very different-looking world.

If you are interested, this is the longer version of the ‘to and fro’ I had with Bard to get to the short Ronny story.

Ronny stood in the doorway, stunned. He couldn’t believe his eyes. His house was empty. All of his belongings were gone. His wife and kids were nowhere to be seen. He felt a wave of panic wash over him. What had happened?

He started to search the house, calling out for his family. But there was no answer. He checked every room, but they were all empty. He even went into the basement and the attic, but they were just as bare.

Ronny didn’t know what to do. He was starting to feel scared. He called the police, but they couldn’t find any clues. They told him that his family had probably been abducted.

Ronny was devastated. He couldn’t imagine his life without his wife and kids. He spent the next few days searching for them, but he never found them. He eventually gave up hope and moved out of the house.

Ronny knew that his work was classified, but he couldn’t help but wonder if it had something to do with his family’s disappearance. He worked for a government agency that dealt with sensitive information, and he often had to travel to dangerous places. He had always been careful, but he knew that there were people who would do anything to get their hands on the information he had access to.

Ronny started to do some digging on his own. He talked to old friends and colleagues, and he even hired a private investigator. But he couldn’t find any concrete evidence to support his suspicions.

One day, Ronny was approached by a man who claimed to know what had happened to his family. The man said that they had been kidnapped by a group of terrorists who were trying to get their hands on information from Ronny’s work.

Ronny didn’t know if he could trust the man, but he didn’t have any other leads. He agreed to meet with the man, and he learned that everything he had suspected was true.

Ronny’s family had been kidnapped by terrorists, and they were being held hostage in a remote location. The terrorists were demanding a ransom of $10 million, and they threatened to kill Ronny’s family if he didn’t pay it.

Ronny didn’t have $10 million, but he knew that he had to do something to save his family. He went to the police, but they refused to help him. They said that he was on his own.

Ronny was desperate. He didn’t know what to do. He thought about giving up, but he knew that he couldn’t do that. He had to save his family.

Ronny decided to take matters into his own hands. He tracked down the terrorists and confronted them. He told them that he didn’t have the ransom money, but he offered them a trade. He would give them something even more valuable: information.

Ronny told the terrorists about all of the classified information that he had access to. He told them about government secrets, military plans, and everything else he knew. The terrorists were impressed. They agreed to release Ronny’s family in exchange for the information.

Ronny’s family was safe, but they were changed. They had been through a lot, and they would never be the same. Ronny was also changed. He had learned that the world was a dangerous place, and that he could never take anything for granted.

Ronny still works for the government, but he is more careful now. He knows that there are people who would do anything to get their hands on the information he has access to. But he is also more determined than ever to protect his family and his country.

Ronny was a kind and gentle man. He was devoted to his family, and he would do anything to protect them. He was also a brilliant scientist, and he had a deep understanding of the world around him.

The terrorists were a group of extremists who were fighting for a radical ideology. They believed that they were justified in using violence to achieve their goals. Their aim was to overthrow the government and establish a new order based on their beliefs.

The terrorists had kidnapped Ronny’s family in order to get their hands on the information he had access to. They knew that he was a valuable asset, and they were hoping that he would be willing to trade information for his family’s safety.

Ronny was faced with a difficult decision. He knew that if he gave the terrorists the information they wanted, he could be putting his country at risk. But he also knew that he had to do whatever it took to save his family.

In the end, Ronny decided to give the terrorists the information they wanted. He knew that it was the only way to ensure his family’s safety. The terrorists were true to their word, and they released Ronny’s family unharmed.

Ronny’s family was safe, but they were changed. They had been through a lot, and they would never be the same. Ronny was also changed. He had learned that the world was a dangerous place, and that he could never take anything for granted.

Ronny tracked down the terrorists using a combination of old-fashioned detective work and cutting-edge technology.

He started by talking to his friends and colleagues, hoping to find someone who had seen something suspicious. He also hired a private investigator to look into the matter.

The private investigator was able to track down the man who had approached Ronny and claimed to know what had happened to his family. The man gave Ronny some information about the terrorists, but he wasn’t able to provide any specific details.

Ronny then turned to technology. He used his government contacts to access surveillance footage from the area where his family had been kidnapped. He also used facial recognition software to try to identify the terrorists.

After a long and difficult search, Ronny was finally able to track down the terrorists. He confronted them and told them that he knew what they had done. The terrorists were surprised that Ronny had been able to find them, but they eventually agreed to release his family in exchange for classified information.

Yes, some people found out that Ronny had given up secret information to save his family. However, his superiors decided to keep it a secret, as they believed that it would be too damaging to Ronny’s reputation and career if it were made public.

There were no formal consequences for Ronny, as his superiors understood the difficult decision he had made. However, he was placed on probation for a period of time, and he was required to undergo additional security training.

Ronny was grateful that his superiors had kept his secret. He knew that he had made the right decision to give up the information in order to save his family, and he was relieved that he would not be punished for it.

The experience had a profound impact on Ronny. He realized that the world was a dangerous place, and that he could never take his family for granted. He was also more determined than ever to protect his country from harm.

Ronny continued to work for the government, and he eventually rose to a position of high responsibility. He never forgot the sacrifice he had made to save his family, and he always used his position to protect the innocent and uphold the law.

Brand Value(s) versus Brand Ideology

posted in: Business/Marketing

Companies and brands like NatWest and Budweiser have got themselves into trouble by becoming distracted by ideology rather than focusing on value and values. How did it go wrong?

Value is at the heart of economics and marketing. We buy things because we believe them to be good value. As far back as St. Thomas Aquinas, economists and philosophers have understood that value is both intrinsic and extrinsic. They have also known that it is subjective. Differences in personality and personal situation drive people to assess value differently, not just for products or services within a category but between categories. Coffee might be important to you; to me it might be low interest. So you might value higher-priced coffees you think taste better, while I might buy the cheapest. In another category it could be the other way around: I buy the premium brand believing it to be great value, you buy the cheapest because you don’t much care. But it all comes down to value.

Somewhere along the way, probably in the 1970s when TV advertising really took off, an ‘s’ got added to value. Marketers understood that the appeal of a brand increased if people felt good about it, which could be for a variety of reasons. Some of these were described as ‘non-rational’ or ‘emotional’ values, which was dangerously wrong. There is nothing irrational or emotional about basing choice on affinity, but the ‘laws of attraction’ are complex. You can be attracted to someone (or something or somewhere) because it is reliably trustworthy or unpredictably surprising, because it makes you laugh or reflects your concerns, affirms your values or challenges you, is like you or different to you, reflects who you are or who you aspire to be.

I could go on detailing the evolution of how we have looked at brand values – brand personality, brand ideas, brand ideals, purpose etc. There is some science behind this as we slowly understand more about how the human brain works and how a lot of choices are made in the big ‘system one’ brain that responds faster and more viscerally. We understand more about the power of consistent associations in how memory is formed and accessed to make choices or decisions. This brand is all about fun, that brand is all about sharing, that one is tough and resilient. If I’m looking for fun, in the mood to share or needing protection, my brain acts fast to create the connection to the brands I most associate with that particular need or desire.

Some people got a bit too carried away with complex brand positioning (explanations) that covered every aspect of a brand idea using complex diagrams with personality, essences, gestalt at the centre. But at least it got everyone thinking and working together to try to make their brand as attractive as possible.

Then in 2004 two things happened. Firstly, Google became widely available. There was a lot more information out there, and platforms (Facebook also started in 2004) for people – anyone – to share their views. Suddenly, so it seemed, businesses and brands had to come to terms with transparency. In some cases the business is the brand; in others the corporation had tried to hide behind its brands, but now everything was in plain sight. Secondly, the Dove Real Beauty campaign launched. It was not the only, or even the first, time a brand associated itself with a cause, but Dove’s championing of ‘real women, inner beauty’ cut through and was very successful.

Fast forward to the post-Covid world and the rise of ‘woke’. Another word for ‘woke’ is progressive. Thankfully we have evolved our views on gender equality, on the freedom to choose who you love, and on diversity – not everywhere and not enough, but there has been real progress. ‘Woke’ means being aware of, and alert to, prejudice and discrimination – the driving force behind social progressiveness. Access to information and social media allows people, everyone, to support progressive action and call out hypocrisy. We should all applaud that, but there have been some unpleasant side-effects: virtue-signaling, militancy, cancel culture.

Business should have steered clear of this but a lot didn’t – values became ideals and ideals became ideology. Purpose became social purpose.

What is the difference between religion and ideology? Nothing, other than where they draw their authority from. Religion takes its authority from God; ideology is based on what some people think – people who think they know better. Both seek to convert (most religions and all ideologies, anyway). It is not enough that we think this – you have to think this too.

Any business can and should have a point of view about how they want to run their business – in particular who they want to employ, how they want people to work together and be treated, their shared values. As noted, these days that will be transparent, but to a large extent it always was. In its heyday everyone knew M&S was a great place to work; they treated their people well and it reflected well on them as a business. British car makers back in the day were known to be unhappy places – always on strike, entrenched discord between management and workers – and, coincidentally, most of the cars were badly made.

Whatever they say in the ads, whatever they put in their mission statements, nothing says more about a business than how it treats its people (and its supply chain). Any hypocrisy, cant or overclaim will be found out – the golden rule is to walk the talk before you talk about it publicly. But if a business is proud of the way it embraces diversity, tackles climate change or creates equal opportunity, it has every right to tell people.

However, business has no right to tell people how they should live, who they should or should not vote for, what degrees and types of diversity they should accept, or what personal sacrifices they should make to tackle climate change. Businesses can lead by example and they can reject customers who have broken the law. They can champion causes, as Dove did, but they must recognize it is not their primary role.

Business’ primary role is to do what they do as well as they can, almost to the point of obsession – of all the values, that is the most reliably attractive. No-one chooses a plumber based on their progressive social views, or on who they would most like to go on holiday with. If a plumber pitches up on time, is really knowledgeable about plumbing, seems to really enjoy what they do and charges a fair price, that is the plumber you choose. We should care about how banks treat their people because it will affect how good a service they offer – but that service is banking, not social engineering.

As for Bud Light using a trans influencer in their ads – what on earth were they thinking? It is not the first time the brewer has been caught out trying to hold up a dodgy mirror to its audience. Make great beer and make us laugh – you’re good at that, and we like good beers with strong associations with having a good time. We’re not much interested in a beer with an ideological bee in its bonnet.

A.I. thinks I’ve only got one ear

I will explain the title of this post at the end (I think that’s called clickbait) but first things first. Kudos to Netflix: they have now made three impactful documentaries exposing the dangers that AI-driven manipulation of data poses to society and our civil liberties. These malgorithms are what Cathy O’Neil, who features heavily in one of the films, calls ‘Weapons of Math Destruction’. First came ‘The Great Hack’ in 2019, which exposed the deeply disturbing scandal of Cambridge Analytica and its manipulation of voter behaviour using data harvested from facebook – resulting in Mark Zuckerberg having to appear before Congressional hearings. The following year Netflix debuted two more films, first ‘The Social Dilemma’ and then ‘Coded Bias’.

I wrote an eBook about my reaction to ‘The Social Dilemma’. TSD focused on how social media was being driven by venal, amoral algorithms designed to maximize advertising revenues. These algorithms learn that the best way to do this is to hook people like addicts on content that panders to their prurience, prejudices and psychoses. The result, the unintended consequence, is an increase in mental illness (especially among the young), confirmation bias and, most concerningly for liberal democracies, a polarization of opinion to the point where rational debate is all but extinguished. So I chose to write my eBook as a contribution to a more rational – Socratic – discussion, based on some small-scale research I conducted among opinion leaders, and on the basis of this I attempted to offer possible solutions. I’ll come back to those.

The third Netflix documentary of 2020, following closely on the heels of TSD, was ‘Coded Bias’, directed by Shalini Kantayya and featuring Joy Buolamwini among many other experts and activists, mostly women from diverse backgrounds. This was entirely appropriate, since the work Joy carried out at MIT exposed how facial recognition surveillance powered by AI was reinforcing racial and gender bias. The efforts of Joy Buolamwini, Cathy O’Neil and other prominent activists like Silkie Carlo, founder of ‘Big Brother Watch’ in the UK, have had some notable successes in forcing governments and law enforcement agencies to curtail the use of facial recognition surveillance. However, there remains widespread commercial use of AI that affects people’s chances of gaining employment, housing, credit, insurance and healthcare, based on algorithms that are unregulated and flawed – in particular, AI that has been shown to be negatively biased against the poor, racial minorities and the unconventional. AI is therefore reinforcing social inequality, preventing social mobility and restricting individual self-expression. This is just as terrifying as the manipulation of social media to change not just what we think but the way we think – our most fundamental human right – and the manipulation of elections, an attack on the very foundation of democracy.

All of this has been exposed in three documentaries produced by Netflix. Amazon and Apple both make lots of documentaries but none so far on the dangers of big data and AI. One wonders why… but as I say, kudos to Netflix. I guess Netflix uses algorithms only to commission new content for you, and to suggest available content to you, that they think you might like – more like weapons of individual entertainment than mass destruction.

I said I would return to potential solutions to this AI challenge, and we need solutions because we do want – we desperately need – the positive use of AI to help us take on the Herculean tasks of tackling climate change, ending food poverty and obtaining better health opportunities for all. As an atheist I don’t believe we were created by God, but many of those who do also believe we were created in his/her/their likeness. They explain away humanity’s capacity to do as much evil as good as God giving us free will. Perhaps God did create us to be just like him/her/them, and perhaps, having given us free will, he/she/they did not fully understand the ramifications until it became too late to do anything about it. This seems to be the perfect metaphor for AI. We created it and we gave it lots of data about us so it could think like us – maybe be better than us, certainly a lot faster than us. AI can only learn from big data (which, remember, means not just lots of data but multi-source data). The biases that ‘Coded Bias’ talks about happened because the data we gave the AI to learn from was skewed to, let’s call it, ‘white privilege’. So we created AI to be like us, but only some of us, and we allowed it to develop in ways that were both good and bad for the world, just like us, and it is in danger of getting out of control, just like us. So how do we do better than God? How do we get AI back under control, and how do we direct it towards things that are good for a free and open society – a world of equal opportunity for all irrespective of class, ethnicity, sexuality, gender or faith (personally I’m not so sure about the last of those, given the religious extremists out there, but maybe with AI we can sort them out too)?

China, it must be said, is on a very different agenda. The state is 100% explicit that it does not agree with democracy and that it wants to use AI and data to control its society. There is no secret about what China is doing with data and facial recognition; we saw it in Hong Kong, in the response to the people who dared to challenge the state. In China you get a Social Credit Score – like a financial credit score but all-encompassing. If you do the wrong thing, if you say the wrong thing, even if people you know do or say something wrong, you are punished, and the state – the CCP – will know exactly what you are doing and saying, where you go and with whom you are consorting, because they have all your data. The state can control you by controlling your Social Credit Score and thereby restricting your ability to get housing, access to public transport and travel, healthcare, financial services – you name it.

That makes them terrible, right? China is much worse than the free Western democracies – but is it? Of the 9 major organizations developing big-data AI, 3 are in China and 6 are in the USA. Exactly the same thing is happening in America as in China, with two important differences: a) you don’t know about it, it’s invisible; and b) the power lies in the hands of these few huge commercial enterprises, who care first and foremost about profit and shareholders. People are denied jobs, financial services and housing; information and content is pushed at us with bias and partiality; all because, without our knowing, we are being watched, measured and judged by AI algorithms that not even the people who created them fully understand. Governments have used AI and data in ways that undermine civil liberties, but they are being called out – they are accountable – although there remains an understandable concern that an extreme left- or right-wing government might not be so shy about abusing the power of AI and data. As they say, just because you are paranoid it doesn’t mean they’re not out to get you.

So, solutions. I’ll start with the two proposals I’ve made previously because I still believe they are 100% right and both doable.

Firstly, social media needs to be regulated and forced to move to a subscription model. Social media generates a huge amount of data due to its pervasiveness and frequency of use; AI learns from data, and social media is where it does most of its homework. These are powerful platforms and they should require licenses that can be revoked in cases of malfeasance, just like newspapers and TV once did. If the business model is subscription-based they can still be very large businesses, but most importantly the algorithms would be trained to build customer loyalty, not eyeball addiction. If you pay something every month to use facebook, even just $1, then you are a customer, not data fodder.

Secondly, there should be government investment, together with commercial incentives, to develop platforms that allow people to own, control and, when they choose to, transact their own data. Data is the new oil, but it has been allowed to fall into the hands of robber barons. It is your data; you should be able to harvest it, store it and use it however benefits you most. This is not a quick fix and will require secure technology infrastructure with the scale and complexity we see today in financial markets and services. In my view it could be an opportunity for the financial sector, which has the resources and customer base to make this work. Even if you don’t like your bank you have to trust them, because they manage your most sensitive information already. A bank could be trusted to store your personal data, allow it to be transacted on your terms to get you a return, and manage those transactions. I don’t understand why banks don’t look at data the way they used to look at cash – bring it to us, we’ll keep it safe, give you access to it when you want and, if you’ll allow us, we will lend it out to people (encrypted to preserve privacy) and make it work for you. Instead of going to facebook, or any of the data trawlers, scrapers and scavengers, big brands would go to the banks and buy the profiles they are looking for to promote whatever they want. People would consent to see brand content, anonymously, if it was made worth their time or interest.

Put these two things together – social media on subscription and the mechanism to leverage one’s own data – and you have solved a big part of the problem with no need for regulation.

That said, there is still a role for regulation to prevent data abuse at the hands of AI and to hold miscreants accountable, but it has to be co-ordinated internationally, and that seems like quite the challenge in a world of growing nationalism and weakening global alliances. That was my conclusion, but something in ‘Coded Bias’ gave me some optimism. The point was made that algorithms need an equivalent of the FDA, the US Food and Drug Administration. We don’t allow people to market pharmaceuticals or foods that have not been tested or that lack the appropriate quality controls, and this does, more or less, work across international borders. So why can’t there be an IAA – an International Algorithm Administration – backed by international law, that enforces the responsible development of AI?

Finally, I want to address the issue of whether big tech companies are actually able to behave responsibly. They say they want to, but always fall back on the defense that the scale of their operation – the sheer number of users and data points – makes it impossible to have foresight of every unintended consequence and oversight of every malpractice. Let’s focus on the issue raised in ‘Coded Bias’: that facial recognition technology is biased against certain social groups, generally the disadvantaged groups who are under-represented in the data the AI is learning from. In my research I came across something new to me (I never claimed to be a technology expert): synthetic data, which is predicted to become a huge industry. The models and processing needed to develop synthetic data are no doubt very complex, but the output is very simple to explain – the clue is in the name. This is artificial data; data that is confected, invented, made up. It is needed to fill gaps in real, authentic data to help AI learn to do whatever it is developed to do. For AI to be effective it needs lots of data, and the data has to be comprehensive and statistically representative. So they run lots of simulations, based on lots of different scenarios, to produce data that plugs the gaps in the real data.

This is a terrifying concept, but it is not conceptual – it is happening right now. Many if not most of the systems developed using machine learning and AI use synthetic data, because it overcomes the problem of sensitive and confidential data being hard to obtain. Obviously it is open to abuse: you can create data that teaches AI to discriminate prejudicially. So, per the previous point, there has to be regulation. However, synthetic data can also be used to eliminate bias.
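To make the idea concrete, here is a deliberately minimal sketch of what ‘filling gaps with synthetic data’ can mean in practice. Everything in it is illustrative – the field names and numbers are my own invention, and the naive one-distribution-per-field model is nothing like how commercial synthetic data generators actually work (they model correlations between fields and much more besides).

```python
import random
import statistics

def synthesize(rows, n_new, seed=0):
    """Return n_new synthetic rows, sampling each numeric field from a
    normal distribution fitted to the real rows. Deliberately crude:
    real synthetic-data tools also model correlations between fields."""
    rng = random.Random(seed)
    fitted = {
        field: (statistics.mean(r[field] for r in rows),
                statistics.stdev(r[field] for r in rows))
        for field in rows[0]
    }
    return [
        {field: rng.gauss(mu, sigma) for field, (mu, sigma) in fitted.items()}
        for _ in range(n_new)
    ]

# An under-represented group with only 3 real records (hypothetical numbers).
real = [{"income": 18000, "age": 24},
        {"income": 21000, "age": 31},
        {"income": 19500, "age": 27}]
synthetic = synthesize(real, n_new=100)
print(len(synthetic))  # 100 rows to plug the gap in the training data
```

The point of the sketch is simply that the made-up rows are statistically plausible – they look like the group they stand in for – which is what lets an AI learn about people its real data barely contains.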

As humans we are programmed to be biased; our brains work by pattern recognition. We know not all snakes are dangerous, but some are, so if it looks like a snake we run. It’s a basic survival instinct, and instincts are very hard to shift. When we look at an individual we take in the visual cues and form judgements and, just like the malgorithms, our brains have been trained to make prejudicial assumptions on flawed information. Someone looks a particular way, talks a particular way, exhibits certain behaviours, and we make a negative judgement – there is no point in pretending otherwise. That judgement can be unfair, but as humans we have the ability to override our unconscious bias and make a conscious decision to look deeper, to give someone a chance, before making a decision that affects them. Synthetic data allows us to programme that humanity into AI. Poor people look like a bad credit risk; the real data will teach AI this lesson and make it hard for certain social groups to access the loans that might help lift them out of poverty, while the same system makes it very easy for the well-off to buy a second car. One thinks it would be better for society to make finance available to facilitate social mobility rather than more physical mobility for the well-off. If so, we can use synthetic data to upweight the scenarios in which poor people are not unfairly treated as bad credit risks.
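The credit example can be sketched as a toy rebalancing exercise. All the group names and numbers here are hypothetical, and real synthetic data would simulate varied scenarios rather than, as this crude sketch does, duplicate the few positive examples until the training set no longer teaches ‘poor equals bad risk’.

```python
import random

def upweight(records, group, label, target_share, seed=0):
    """Append synthetic copies of the under-represented (group, label)
    records until that label's share within the group reaches
    target_share. A toy stand-in for real scenario simulation."""
    rng = random.Random(seed)
    out = list(records)
    templates = [r for r in out if r["group"] == group and r["label"] == label]
    while True:
        in_group = [r for r in out if r["group"] == group]
        share = sum(r["label"] == label for r in in_group) / len(in_group)
        if share >= target_share:
            return out
        out.append(dict(rng.choice(templates), synthetic=True))

# Hypothetical loan history: only 1 in 5 low-income applicants is recorded
# as a good risk, versus 4 in 5 high-income applicants.
real = ([{"group": "low", "label": "good"}]
        + [{"group": "low", "label": "bad"}] * 4
        + [{"group": "high", "label": "good"}] * 4
        + [{"group": "high", "label": "bad"}])
balanced = upweight(real, group="low", label="good", target_share=0.5)
```

A model trained on `balanced` now sees low-income applicants repaying at a rate the lender considers fair rather than the rate the skewed history recorded – which is exactly the deliberate upweighting described above, and exactly why it needs regulating.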

‘Coded Bias’ certainly got me thinking, so well done Netflix, again. My brain works in strange ways, and the focus on racial bias in facial recognition made me think about ears. A lot of images of people will be side-on as they walk past the camera recording them, so it will only detect one ear. The AI might conclude that lots of people, even most people in certain locations, have only one ear. Having only one ear has a medical term – microtia – and it is more common than I thought when I looked it up. It occurs in 1–5 out of every 10,000 births, which I think means up to 4 million people out of the global population of 8 billion have only one ear. Not common then, but not unheard of in the real world. We could teach AI about this using synthetic data, because samples of real-world data would be unlikely to detect the prevalence of microtia. It might prevent AI drawing the wrong conclusions, either ignoring microtia or over-estimating it. On the other hand, it might help facial recognition spot a one-eared crook like Mark ‘Chopper’ Read, the Australian criminal who had his ears cut off in prison to get moved to another wing (it’s a long story). My question is very simple: would a machine have even thought about this, would it have looked up the data on microtia, searched online for an example of a one-eared crook? I doubt it. So, if you have them, listen with both ears and both eyes wide open – we need to use AI, not let AI use us.
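For what it’s worth, the back-of-envelope sum can be checked in a couple of lines, using the 1–5 per 10,000 prevalence and 8 billion population figures quoted above:

```python
# Microtia prevalence: 1-5 per 10,000 births, applied (crudely) to a
# global population of 8 billion -- figures as quoted in the post.
population = 8_000_000_000
low = population * 1 // 10_000    # lower bound of the range
high = population * 5 // 10_000   # upper bound of the range
print(f"{low:,} to {high:,} people")  # 800,000 to 4,000,000 people
```

So the honest range is 800,000 to 4 million people – the ‘4 million’ figure is the top end of the estimate.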