Brand Value(s) versus Brand Ideology


Companies and brands like NatWest and Budweiser have got themselves into trouble by becoming distracted by ideology rather than focusing on value and values. How did it go wrong?

Value is at the heart of economics and marketing. We buy things because we believe them to be good value. As far back as St. Thomas Aquinas, economists and philosophers have understood that value is both intrinsic and extrinsic. They have also known that it is subjective. Differences in personality and personal situation drive people to assess value differently, not just for products or services within a category but between categories. Coffee might be important to you; to me it might be low interest. So you might value higher-priced coffees you think taste better, while I buy the cheapest. In another category it could be the other way around: I buy the premium brand believing it to be great value, you buy the cheapest because you don’t much care. But it all comes down to value.

Somewhere along the way, probably in the 1970s when TV advertising really took off, an ‘s’ got added to value. Marketers understood that the appeal of a brand increased if people felt good about it, which could be for a variety of reasons. Some of these were described as ‘non-rational’ or ‘emotional’ values, which was dangerously wrong. There is nothing irrational or emotional about basing choice on affinity, but ‘The Laws of Attraction’ are complex. You can be attracted to someone (or something or somewhere) because it is reliably trustworthy or unpredictably surprising, because it makes you laugh or reflects your concerns, affirms your values or challenges you, is like you or different to you, reflects who you are or who you aspire to be.

I could go on detailing the evolution of how we have looked at brand values – brand personality, brand ideas, brand ideals, purpose etc. There is some science behind this as we slowly understand more about how the human brain works and how a lot of choices are made in the fast, visceral ‘System 1’ brain. We understand more about the power of consistent associations in how memory is formed and accessed to make choices or decisions. This brand is all about fun, that brand is all about sharing, that one is tough and resilient. If I’m looking for fun, in the mood to share or needing protection, my brain acts fast to create the connection to the brands I most associate with that particular need or desire.

Some people got a bit too carried away with complex brand positioning (explanations) that covered every aspect of a brand idea, using elaborate diagrams with personality, essences and gestalt at the centre. But at least it got everyone thinking and working together to try to make their brand as attractive as possible.

Then in 2004 two things happened. Firstly, Google became widely available. There was a lot more information out there, and platforms (Facebook also started in 2004) for people, anyone, to share their views. Suddenly, so it seemed, businesses and brands had to come to terms with transparency. In some cases the business is the brand; in others the corporation had tried to hide behind its brands, but now everything was in plain sight. Secondly, the Dove Real Beauty campaign launched. It was not the only, or even the first, time a brand associated itself with a cause, but the Dove championing of ‘real women, inner beauty’ cut through and was very successful.

Fast forward to the post-Covid world and the rise of ‘woke’. Another word for ‘woke’ is progressive. Thankfully we have evolved our views on gender equality, on the freedom to choose who you love, on diversity – not everywhere and not enough, but there has been real progress. ‘Woke’ means being aware of, and alert to, prejudice and discrimination, the driving force behind social progressiveness. Access to information and social media allows people, everyone, to support progressive action and call out hypocrisy. We should all applaud that, but there have been some unpleasant side-effects – virtue-signaling, militancy, cancel culture.

Business should have steered clear of this but a lot didn’t – values became ideals and ideals became ideology. Purpose became social purpose.

What is the difference between religion and ideology? Nothing, other than where they draw their authority from. Religion takes its authority from God; ideology is based on what some people think – people who think they know better. Both seek to convert (most religions and all ideologies). It is not enough that we think this – you have to think this too.

Any business can and should have a point of view about how they want to run their business, in particular whom they want to employ, how they want people to work together and be treated, their shared values. As noted, these days that will be transparent, but to a large extent it always was. In its heyday everyone knew M&S was a great place to work; they treated their people well and it reflected well on them as a business. British car makers back in the day were known to be unhappy places, always on strike, with entrenched discord between management and workers, and not coincidentally most of the cars were badly made.

Whatever they say in the ads, whatever they put in their mission statements, nothing says more about a business than how it treats its people (and its supply chain). Any hypocrisy, cant or overclaim will be found out – the golden rule is walk the talk before you talk about it publicly. But if a business is proud of the way it embraces diversity, tackles climate change and creates equal opportunity, it has every right to tell people.

However, business has no right to tell people how they should live, who they should vote for and what they should not vote for, what degrees and types of diversity they should accept, what personal sacrifices they should make to tackle climate change. They can lead by example and they can reject customers if they have broken the law. They can champion causes, as Dove did, but they must recognize it is not their primary role.

Business’s primary role is to do what they do as well as they can, almost to the point of obsession – of all the values that is the most reliably attractive. No-one chooses a plumber based on their progressive social views or who they would most like to go on holiday with. If a plumber pitches up on time, is really knowledgeable about plumbing, seems to really enjoy what they do and charges a fair price, that is the plumber you choose. We should care about how banks treat their people because it will affect how good a service they offer, but that service is banking, not social engineering.

As for Budweiser, using a trans person in their ads, what on earth were they thinking? It is not the first time they have been caught out trying to hold up a dodgy mirror to their audience. Make great beer and make us laugh – you’re good at that, and we like good beers with strong associations around having a good time. We’re not much interested in a beer with an ideological bee in its bonnet.

A.I. thinks I’ve only got one ear

I will explain the title of this post at the end (I think that’s called clickbait) but first things first. Kudos to Netflix: they have now made three impactful documentaries exposing the dangers that AI-driven manipulation of data poses to society and our civil liberties. These malgorithms are what Cathy O’Neil, who features heavily in one of the films, calls ‘Weapons of Math Destruction’. First came ‘The Great Hack’ in 2019, which exposed the deeply disturbing scandal of Cambridge Analytica and their manipulation of voter behaviour using data Facebook had provided, resulting in Mark Zuckerberg having to appear in front of Congressional hearings. The following year Netflix debuted two more films, first ‘The Social Dilemma’, then ‘Coded Bias’.

I wrote an eBook about my reaction to ‘The Social Dilemma’. TSD focused on how social media was being driven by venal, amoral algorithms designed to maximize advertising revenues. These algorithms learn that the best way to do this is to feed people, to hook them like addicts, on content that panders to their prurience, prejudices and psychoses. The result, the unintended consequence, is an increase in mental illness especially among the young, confirmation bias and, most concerningly for liberal democracies, polarization of opinion to the point where rational debate is all but extinguished. So I chose to write my eBook as a contribution to a more rational – Socratic – discussion, based on some small-scale research I conducted among opinion leaders, and on the basis of this I attempted to offer possible solutions. I’ll come back to those.

The third Netflix documentary of 2020, following closely on the heels of TSD, was ‘Coded Bias’, directed by Shalini Kantayya and featuring Joy Buolamwini among many other experts and activists, mostly women from diverse backgrounds. This was entirely appropriate since Joy’s work, carried out at MIT, exposed how facial recognition surveillance powered by AI was reinforcing racial and gender bias. The efforts of Joy Buolamwini, Cathy O’Neil and other prominent activists like Silkie Carlo, founder of ‘Big Brother Watch’ in the UK, have had some notable successes in forcing governments and law enforcement agencies to curtail the use of facial recognition surveillance. However, there remains widespread commercial use of AI that affects people’s chances of gaining employment, housing, credit, insurance and healthcare, based on algorithms that are unregulated and flawed – in particular AI that has been shown to be negatively biased against the poor, racial minorities and the unconventional. AI is therefore reinforcing social inequality, preventing social mobility and restricting individual self-expression. This is just as terrifying as the manipulation of social media to change not just what we think but the way we think, our most fundamental human right, and the manipulation of elections, an attack on the very foundation of democracy.

All of this has been exposed in three documentaries produced by Netflix. Amazon and Apple both make lots of documentaries but none so far on the dangers of big data and AI. One wonders why… but as I say, kudos to Netflix. I guess in the case of Netflix they use algorithms only to commission new content for you, and to suggest available content to you, that they think you might like – more like weapons of individual entertainment than mass destruction.

I said I would return to potential solutions to this AI challenge, and we need solutions because we do want, we desperately need, the positive use of AI to help us take on the Herculean tasks of tackling climate change, food poverty and obtaining better health opportunities for all. As an atheist I don’t believe we were created by God, but many of those who do also believe we were created in his/her/their likeness. They explain away humanity’s capacity to do as much evil as good as God giving us free will. Perhaps God did create us to be just like him/her/them, and perhaps, having given us free will, he/she/they did not fully understand the ramifications of that until it became too late to do anything about it.

This seems to be the perfect metaphor for AI. We created it and we gave it lots of data about us so it could think like us, maybe be better than us, certainly a lot faster than us. AI can only learn from big data (which remember means not just lots of it but multi-source). The biases that ‘Coded Bias’ talks about happened because the data we gave the AI to learn from was skewed to, let’s call it, ‘white privilege’. So we created AI to be like us, but only some of us, and we allowed it to develop in ways that were both good and bad for the world, just like us, and it is in danger of getting out of control, just like us. So how do we do better than God? How do we get AI back under control and how do we direct it towards things that are good for a free and open society, a world of equal opportunity for all irrespective of class, ethnicity, sexuality, gender, faith (personally I’m not so sure about the last of those given the religious extremists out there, but maybe with AI we can sort them out too)?

China, it must be said, is on a very different agenda. They are 100% explicit that they do not agree with democracy and that they want to use AI and data to control their society. There is no secret to what China are doing with data and facial recognition; we saw this in Hong Kong in response to the people who dared to challenge the state. In China you get a Social Credit Score, like a financial credit score but all-encompassing. If you do the wrong thing, if you say the wrong thing, even if people you know do or say something wrong, you are punished, and the state – the CCP – will know exactly what you are doing and saying, where you go and with whom you are consorting, because they have all your data. The state can control you by controlling your Social Credit Score and thereby restricting your ability to get housing, access to public transport & travel, healthcare, financial services, you name it.

That makes them terrible, right? China is much worse than the free Western democracies – but is it? Of the 9 major organizations developing big data AI, 3 are in China and 6 are in the USA. Exactly the same thing is happening in America as in China, with two important differences: a) you don’t know about it, it’s invisible, and b) the power lies in the hands of these few huge commercial enterprises who care first and foremost about profit and shareholders. People are denied jobs, financial services and housing, and information & content is pushed at us with bias and partiality, all because, without us knowing, we are being watched, measured and judged by AI algorithms that not even the people who created them fully understand. Governments have used AI and data in ways that undermine civil liberties, but they are being called out, they are accountable, although there remains an understandable concern that an extreme left- or right-wing government might not be so shy in abusing the power of AI & data. As they say, just because you are paranoid it doesn’t mean they’re not out to get you.

So, solutions. I’ll start with the two proposals I’ve made previously because I still believe they are 100% right and both doable.

Firstly, social media needs to be regulated and forced to move to a subscription model. Social media generates a huge amount of data due to its pervasiveness and frequency of use. AI learns from data, and social media is where it does most of its homework. These are powerful platforms and they should require licenses that can be revoked in the case of malfeasance, just like newspapers and TV were. If the business model is subscription-based they can still be very large businesses, but most importantly the algorithms would be trained to build customer loyalty, not eyeball addiction. If you pay something every month to use Facebook, even just $1, then you are a customer, not data fodder.

Secondly, there should be government investment together with commercial incentives to develop platforms that allow people to own, control and, when they choose to, transact their own data. Data is the new oil, but it has been allowed to fall into the hands of robber barons. It is your data; you should be able to harvest it, store it and use it however benefits you most. This is not a quick fix and will require secure technology infrastructure with the scale and complexity we see today in financial markets and services. In my view it could be an opportunity for the financial sector, who have the resources and customer base to make this work. Even if you don’t like your bank you have to trust them, because they manage your most sensitive information already. A bank could be trusted to store your personal data, allow it to be transacted on your terms to get you a return, and to manage those transactions. I don’t understand why banks don’t look at data in the same way they used to look at cash – bring it to us, we’ll keep it safe and give you access to it when you want, and if you’ll allow us we will lend it out to people (encrypted to preserve privacy) and make it work for you. Instead of going to Facebook, or any of the data trawlers, scrapers and scavengers, big brands would go to the banks and buy the profiles they are looking for to promote whatever they want. People would consent to see brand content, anonymously, if it was made worth their time or interest.
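To make that a little more concrete, here is a purely illustrative sketch of what a single consent-mediated data transaction might look like as a record. Everything in it – names, fields, prices – is my invention for the sake of the example, not any bank’s actual system:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ConsentGrant:
    """One person allowing one brand to use one slice of their data, on their terms."""
    data_owner: str            # the person the data belongs to (hypothetical ID)
    requester: str             # the brand buying access
    fields_shared: list[str]   # e.g. ["age_band", "interests"] - never the raw identity
    price_paid: float          # the return to the data owner
    anonymous: bool = True     # identity withheld unless the owner opts in
    granted_at: datetime = field(default_factory=datetime.now)

# The custodian (a bank, in this proposal) would only release profiles
# covered by a grant like this, and would handle payment and revocation.
grant = ConsentGrant("alice", "some_brand", ["age_band", "interests"], 0.50)
print(grant)
```

The point of the structure is simply that the owner, not the platform, is the party granting access and taking the return.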

Put these two things together – social media on subscription and the mechanism to leverage one’s own data – and you have solved a big part of the problem with no need for regulation.

That said, there is still a role for regulation to prevent data abuse at the hands of AI and to hold miscreants accountable, but it has to be co-ordinated internationally, and that seems like quite the challenge in a world where nationalism appears to be growing and global alliances weakening. That was my conclusion, but something in ‘Coded Bias’ gave me some optimism. The point was made that algorithms need an equivalent to the FDA, the US Food and Drug Administration. We don’t allow people to market pharmaceuticals or foods that have not been tested or lack the appropriate quality controls. And this does, more or less, work across international borders. So why can’t there be an IAA, an International Algorithm Administration, backed by international law, that enforces the responsible development of AI?

Finally, I want to address the issue of whether big tech companies are actually able to behave responsibly – they say they want to, but always use the defense that the scale of their operation, the sheer number of users and data points, makes it impossible to have foresight on all unintended consequences and oversight on every malpractice. Let’s focus on the issue raised in ‘Coded Bias’, that facial recognition technology is biased against certain social groups, generally the disadvantaged groups who are under-represented in the data the AI is learning from. In my research I came across something new to me (I never claimed to be a technology expert). It is called synthetic data and is predicted to become a huge industry. The models and processing needed to develop synthetic data are no doubt very complex, but the output is very simple to explain; the clue is in the name. This is artificial data – data that’s confected, invented, made up. It is needed to fill gaps in real, authentic data to help AI learn to do whatever it is developed to do. For AI to be effective it needs lots of data, and the data has to be comprehensive and statistically representative. So they run lots of simulations based on lots of different scenarios in order to produce data to plug the gaps in real data.
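As a toy illustration of the principle – fit a model to the scarce real data you do have, then sample from it to plug the gap – here is a minimal sketch. Real systems use far more sophisticated generative models; the single numeric feature and all the numbers here are invented for simplicity:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Toy "real" dataset: 900 samples from group A, only 100 from group B,
# each a single numeric feature (say, an income figure).
group_a = rng.normal(loc=50_000, scale=8_000, size=900)
group_b = rng.normal(loc=42_000, scale=9_000, size=100)

# Fit a simple distribution to the under-represented group...
mu, sigma = group_b.mean(), group_b.std()

# ...then draw synthetic records until both groups are the same size.
synthetic_b = rng.normal(loc=mu, scale=sigma, size=len(group_a) - len(group_b))
balanced_b = np.concatenate([group_b, synthetic_b])

print(len(group_a), len(balanced_b))  # 900 900 - the gap is plugged
```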

This is a terrifying concept, but it is not conceptual – it is happening right now. Many if not most of the systems developed using machine learning and AI use synthetic data; it overcomes the problem of sensitive and confidential data being hard to get. Obviously it is open to abuse: you can create data to feed to AI that teaches it to discriminate prejudicially. So, per the previous point, there has to be regulation. However, it can also be used to eliminate bias.

As humans we are programmed to be biased; our brains work by using pattern recognition. We know not all snakes are dangerous but some are, so if it looks like a snake we run. It’s a basic survival instinct, and instincts are very hard to shift. When we look at an individual we take in the visual cues and form judgements and, just like the malgorithms, our brains have been trained to make prejudicial assumptions on flawed information. Someone looks a particular way, talks a particular way, exhibits certain behaviours, and we make a negative judgement – there is no point in pretending otherwise. That judgement can be unfair, but as humans we have the ability to override our unconscious bias and make a conscious decision to look deeper, to give someone a chance, before making a decision that affects them. Synthetic data allows us to programme that humanity into AI. ‘Poor people are a bad credit risk’: the real data will teach AI this lesson and make it hard for certain social groups to access the loans that might help lift them out of poverty. The same system will make it very easy for the well-off to buy a second car. One thinks it would be better for society to make finance available to facilitate social mobility rather than more physical mobility for the well-off. If so, we can use synthetic data to upweight the scenarios in which poor people are not unfairly treated as bad credit risks.
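Here is a hedged sketch of that upweighting idea. For brevity it uses per-sample weights rather than literally generating extra synthetic records – a stand-in for the same principle of upweighting under-represented scenarios – and every number and threshold in it is invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy loan book: feature = income, label = repaid (1) or defaulted (0).
income = rng.normal(40_000, 12_000, size=1_000)
repaid = (rng.random(1_000) < 0.5 + (income - 40_000) / 100_000).astype(int)
X, y = income.reshape(-1, 1), repaid

# Historic data under-represents low-income applicants who repaid.
# Upweight those cases (bottom-quartile income AND repaid) so the model
# learns that they exist, instead of writing off the whole group.
weights = np.ones(len(y))
weights[(income < np.quantile(income, 0.25)) & (y == 1)] = 3.0

model = LogisticRegression().fit(X, y, sample_weight=weights)
```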

‘Coded Bias’ certainly got me thinking, so well done Netflix, again. My brain works in strange ways, and the focus on racial bias in facial recognition made me think about ears. A lot of images of people will be side-on as they walk past the camera that’s recording them, so it will only detect one ear. The AI might conclude that lots of people, even most people in certain locations, have only one ear. Having only one ear has a medical term – it’s called microtia – and it is more common than I thought when I looked it up. It occurs in 1-5 out of every 10,000 births, which at the top end works out at about 4 million people out of the global population of 8 billion (the quick sum is below). Not common then, but not unheard of in the real world. We could teach AI about this using synthetic data, because samples of real-world data would not likely detect the prevalence of microtia. It might prevent AI drawing the wrong conclusions, either ignoring microtia or over-estimating it. On the other hand, it might help facial recognition spot a one-eared crook like Mark ‘Chopper’ Read, the Australian criminal who cut off his own ear in prison to get an early release (it’s a long story). My question is very simple – would a machine have even thought about this, would it have looked up the data on microtia, searched online for an example of a one-eared crook? I doubt it. So, if you have them, listen with both ears and keep both eyes wide open – we need to use AI, not let AI use us.
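The back-of-envelope sum, for anyone checking my working:

```python
population = 8_000_000_000
low, high = 1 / 10_000, 5 / 10_000  # reported prevalence range of microtia

print(f"{population * low:,.0f} to {population * high:,.0f} people")
# 800,000 to 4,000,000 people - 4 million at the top end of the range
```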

Is it time to be seriously worried about AI?

They say the time to get worried on an airplane is when the crew look scared. So when 1,100 top technology leaders and developers – people like Elon Musk and Steve Wozniak (but interestingly not Bill Gates or Mark Zuckerberg) – publish an open letter warning of the potential risks of AI to the future of the human race and calling for a six-month moratorium on further AI development, it surely has to be time to be, at the very least, concerned.

The trigger for this major red flag would appear to be the release of ChatGPT by OpenAI, the conversational front end to their artificial intelligence system, GPT-4. Let’s unpack this. OpenAI is a research laboratory funded by the likes of Elon Musk and latterly Bill Gates/Microsoft. OpenAI is a not-for-profit organisation, but it has a very much for-profit subsidiary, OpenAI Limited Partnership, that commercialises what the lab develops. The launch of ChatGPT took the valuation of the for-profit arm to over $29 billion. GPT-4 stands for ‘Generative Pre-trained Transformer, 4th generation’ and it is a ‘multimodal large language model’, which in effect means something very, very intelligent that you can talk to through its chat box, ChatGPT. You can ask it questions and it will reply, like a kind of talking search engine (Google are very worried about it because their version, ‘Bard’, is by all accounts not as good). But ChatGPT can do much more: it can create stuff – letters, essays, stories, poems – it can even write code.
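For the curious, this is roughly what asking GPT-4 a question programmatically looked like with OpenAI’s Python client at the time of writing – a minimal sketch that assumes you have installed the openai package and hold an API key:

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder - use your own key

# Send one question to GPT-4 through the chat interface - the same model
# that powers the ChatGPT chat box - and print its reply.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "Explain, in one paragraph, what a large language model is."}],
)
print(response.choices[0].message.content)
```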

The Open Letter calls for GPT-4 to be the line in the sand, the signal to pause and figure out how this should all be governed, what oversight and guardrails need to be put in place to avoid AI getting out of control and to avoid unintended consequences.

I have been worried about AI for a long time and I have occasionally shared my concerns, and what drives them, in my blogs and eBooks. I’m not a technology expert by any stretch of the imagination, but I do have a very elastic imagination (which, in my defense, Einstein said was more important than knowledge). Imagination is fueled by, to give it the posh word, the zeitgeist. You sniff what is in the air culturally, join the dots and let your imagination do the rest. I can be more specific than that and point to three things, three realizations, that joined some dots and got me worried about AI, social media and robotics.

The first realization was that if you see it in the movies (or in the pages of science fiction) it has a habit of coming true – science fiction more often than not becomes non-fiction. If you live long enough you see the widespread adoption of technologies that were wild ideas in old films, TV shows and books – digital photography (‘The Man Who Fell to Earth’), the iPhone (Star Trek), virtual reality (‘Brainstorm’), ChatGPT (computers like HAL in ‘2001: A Space Odyssey’ and just about every other deep-space film or show). We can’t yet ‘beam me up, Scotty’, nor do we have robots that are indistinguishable from humans, robots that can form and reform into any shape, or alien invasions intent on mining the earth, so it hasn’t all become reality, yet…

Nevertheless, it struck me that sci-fi is a kind of forward memory. When I watched movies like ‘Phenomenon’ or ‘Lucy’ it made me think about the possibility of the human brain working at 100% capacity, but when I watched ‘The Matrix’, ‘Ex Machina’ or ‘Westworld’ I worried that machines would get there first because, motivated by power and greed, we would facilitate this.

The second realisation was that the internet was becoming a connected global brain (I’m not claiming to be the only person to have spotted this). On the one hand the internet was becoming the warehouse for every single thing anyone and everyone had ever written throughout history, every single human artefact; on the other hand the world was increasingly uploading every thought, comment, experience and bit of data, with the ability to search and connect all of it. We were – we are – witnessing the creation of the biggest brain imaginable. Not a human brain belonging to one person, made up of soft tissues, cells, blood, nerves and neural pathways, but a digital brain that combines the brains of every person past and present. The key to unlocking the power of this global digital brain would lie in the intelligence of whoever – or whatever – interrogated it. You need a powerful search engine and the most efficient intelligence using that engine – the human brain is highly efficient in some ways, but it is nowhere near as fast as supercomputers with artificial intelligence. We may (at the moment) be able to ask smarter, more insightful and imaginative questions, but at a maximum rate of 1,000 per second (one synaptic transmission per millisecond), which is 10 million times slower than a computer. A computer can pose 10 billion questions in one second and process the answers just as fast.
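The back-of-envelope comparison behind those figures:

```python
human_rate = 1_000              # ~one synaptic transmission per millisecond
computer_rate = 10_000_000_000  # ~10 billion operations per second

print(f"{computer_rate // human_rate:,}x faster")  # 10,000,000x faster
```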

“When someone points at the moon, only the fool looks at the finger” – old Chinese proverb.

Since I first came across this old proverb decades ago it has stuck in my mind. It resurfaced when thinking about social media and sparked my third realization. Like many people I was greatly affected by Netflix’s ‘The Social Dilemma’, so much so that I wrote an eBook about it. In that eBook I touch on the idea of unfettered AI possibly being the beginning of the end of life as we know it, but the main idea I put forward is that AI-powered social media is the enemy of the kind of Socratic debate we desperately need to address the world’s challenges. The solutions I put forward are very simple: a) we need to create the means for people to own and transact, on their own terms and for their own benefit, their personal data, and b) there has to be regulation to force social media to be subscription-based and not dependent on advertising – if you don’t pay for the product, you are the product. Notwithstanding, my realisation was that if AI were used primarily for commercial purposes by organisations impervious to oversight, governance and, where necessary, regulation, then the finger would not be pointing at the moon, it would be pointing at existential jeopardy. My realisation was that the unintended negative consequences of AI-driven social media were the canary in the cage – and it just fell off its perch.

In times of trouble turn to the BBC (Radio 4 to be precise). In 2021 Stuart Russell delivered the Reith Lectures and his subject was AI and how to get it right. Russell is Professor of Computer Science at Berkeley and an expert in AI. I commend his series of lectures to everyone. It reassured me somewhat, but like most people, if you feel worried but powerless you gravitate to any wise person who says it will be – just might be – OK. Having read the Open Letter, I have been jolted out of my fragile sense of security.

I am determined to end on something positive, but I must first join some more dots that I see. The transgender debate is toxic; it is nigh-on impossible to make any kind of comment or observation without being branded a ‘transphobe’, but I will try nonetheless. Some people have commented that gender fluidity has tended to surface towards the end of a particular civilization or empire and have found examples of this (possibly with selective bias) going back thousands of years. Who knows whether transgender presages the decline and fall of empire, but for sure it is not a new cultural phenomenon. I would simply make the obvious observation that if we are heading for a world where AI machines and robots increasingly replace humans, then gender is irrelevant, even biological gender, aka sex, because biology is irrelevant.

Right now I do not know whether to be more concerned about AI or geopolitics or climate change or another pandemic (zoonotic or man-made) – we live in troubling times, and for the most part I choose to worry more about whether Steve Borthwick can turn around the England rugby team in time to be competitive at the forthcoming World Cup. (If I had time I would explain that sport is honestly something that gives me faith in humanity; it shows that at our best we can embrace diversity and compete while still remaining friends.)

But back to geopolitics: it is hard to ignore the threat of a global confrontation of ideologies. On the one hand you have China and Russia (and Iran and Saudi Arabia, among a few others) who reject democracy in favour of a totalitarian and repressive form of government. Even if the more liberal democratic nations come together to respond responsibly to the challenge of AI, can we assume the same will be true of totalitarian states? China has very deliberately amassed the tools to control its society through the control of all its data; it is hardly likely to hold back on the most advanced use of AI to further this aim and enable it to establish global hegemony. Anyone who doubts this, just read ‘The Great China Plan’ – they pretty much spell it out. They invented the idea of ‘kowtow’ and they intend to bring it back.

If the West takes a six-month moratorium, will China and Russia do the same, or are we just handing them a lead at the most crucial time?

I said I would end on something positive, and it is this. As social apes we are only meant to be able to live in small groups of 100 or fewer. At the start of the first millennium the world population was about 100 million. The biggest city was Alexandria with about 1 million (similar to Rome 200 years later), but the few other cities that existed were much smaller than that, fewer than 100 thousand; most people lived in villages. By 1100 the world population was somewhere between 300 and 400 million, and cities had not grown in size or number. By the 1940s the world population had exploded to over 2 billion; today it is 7.9 billion, there are over 500 cities with more than a million people and 31 megacities with 10 million or so. Fewer than half of us live in rural villages; most of us live in cities. How is this possible? Technology.

Technology, including AI, perhaps especially AI, has the power to solve more problems than it creates. If we look at some of the most troubling and intractable problems we face – climate change, health, food & water poverty – it might be that only with the power of AI can we hope to address them. As the Open Letter ends:-

“Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt.”

As for China, history shows repressive societies always fail. Forget Orwell’s ‘1984’ and its depressing ending where Winston Smith accepts his death in service of ‘The Party’ while selling everyone out. Read Karl Popper’s ‘The Open Society and Its Enemies’, written at the end of World War II – liberal democracies will always win (eventually) because they champion peaceful progress, and that is what most of us want and, ironically, will fight for.

In China there is no doubt the driving force behind AI is to control people and suppress opposition to the government (as we saw in Hong Kong), to support totalitarianism. In the West the driving force is commercial, and in a democracy that can be controlled. It does, however, require international collaboration to create effective regulation and accountability.

While writing this post I had BBC Radio 4 on in the background and this very subject was being debated, the Open Letter and its ramifications. The conclusion was that it was welcome and timely and that it should and would engage democratic debate. Here’s hoping – at least some of the airplane’s crew look calm.

Edited version of the Open Letter

Pause Giant AI Experiments: An Open Letter

Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control…

…We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable…

…Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.  We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.

Remember to put the market in marketing


Many moons back I used to have a corny line that I would, it shames me to admit, trot out every now and again: “I put the Mark in marketing”. I was joking, but then one night, over too many glasses of wine, my good friend and highly talented advertising man Mark Fiddes decided we should use this idea to start a new agency staffed entirely by people called Mark. We started to go through all the Marks we knew and admired in the industry and quite quickly came to the conclusion we could field a great team. The more we talked about it the more we convinced ourselves this was a brilliant idea. It was distinctive, everyone would hear about it, we could have endless fun with the idea – ‘Top Marks’, ‘Advertising that always hits the Mark’, ‘MARKeting that cuts through’. Anyone calling the agency would be greeted with “Hi, you’re through to Marks, which Mark would you like to speak to?”. In pitches we would say “Mark is going to take you through the overall strategy for the campaign, then Mark will share the creative work and finally Mark will explain how we want to execute the idea in media and brand activation”. Hilarious. One obvious flaw: not a lot of diversity. So we decided that anyone else we wanted to bring on board, male or female, would have to change their name to Mark. Amazingly, the idea never went any further.

So, forget the Mark in marketing – what about the market? Who puts the market in marketing thinking? Let me explain. Even with the best marketing in the world you can still experience catastrophic sales decline, even brand oblivion, if you get the market wrong. It’s not that hard to look back and see where and to whom this happened – if you are of a certain age. No-one under the age of 30 will remember Kodak, Motorola, Toshiba, Tie Rack, HMV. They will know about IBM, but as a diversified software business, not as the titan of computers. Not all these businesses/brands had great marketing; some did, some didn’t need to, such was their grip on the market.

Kodak is the saddest case study. In the 1980s Kodak was a massive global business with a brand name as well-known as Coca-Cola. They employed hundreds of thousands of people, had revenues well over $10 billion a year with 80%-plus market share of both the film and camera markets, and a strong photocopier business (not as big as Xerox, and who remembers them now?). What happened? The market changed; it went digital. In 2012 Kodak filed for bankruptcy. If you judge marketing by advertising and promotions, Kodak was pretty good. If you judge marketing the old-fashioned way, access to market, Kodak was outstanding. They had the best sales people and, as near as makes no difference, 100% distribution in any outlet worldwide that even thought about selling film (for the youngsters, that is what we used to put in cameras). Did they fail to see digital coming? No, they didn’t – they under-estimated it, both the speed at which it would happen and the impact it would have on cameras (and eventually phones). Kodak had digital-imaging patents worth $3 billion it had to sell off to buy more time. If you don’t read the market – ideally lead the market – then no amount of marketing will help you.

If you are old enough you can easily look back and see who failed to read the market. You can recall incredibly well-known brands that died off because the market moved away from them. No-one could ever imagine that IBM or Kodak or Nokia would drop off the radar, but I guess we all know that changes in technology are hard to assess. But that doesn’t explain Blockbuster, Tie Rack, Watney’s, HMV – they were not technology businesses per se, and though technology played a hand in some cases it was not technology that was hard to acquire. In the case of Watney’s or Tie Rack they just failed to see how the market was changing – but they must have known. Levi’s very nearly made the same mistake. I remember meeting the CEO of Levi’s Europe, an American. They were a client, and he and I were about the same age. He was not as enthusiastic as his more junior colleagues about the work we were about to do for them, some ethnographic research among young opinion leaders in Paris, London and Milan. I pointed out to him the problem they had to tackle: “You and I are well into our 40s and we’re both wearing Levi’s because we think they are cool – that’s the problem. We’re not cool, they are, and they’re not wearing Levi’s because we are.”

Watney’s must have known all their mates wanted to drink Stella Artois. Tie Rack must have noticed that fewer and fewer of the commuters arriving at Waterloo were wearing suits and ties. Someone has to ask the big 3 questions:-

1. What is happening in the world?

2. How will that affect our market?

3. How and when do we need to adapt our product and brand, maybe change our whole business model?

It should be the most senior marketer, but it rarely is; they do ‘marketing’, not business strategy. I have said many times, and still believe, that Steve Jobs was the world’s greatest marketer. He did ask those questions; he did develop his brand, his products and his business model at the right time, slightly before he needed to. And he was really good at what most people think of as ‘marketing and advertising’. With no formal training in marketing he understood market segmentation, targeting, the power of different, the impact of a creative idea, bold media choices, building disciples not consumers, brand events etc etc.

“It’s a funny sort of memory that only works backwards”, said the Queen to Alice.

We can look back and be astonished that brands as big as IBM or Kodak failed to read the market and/or react but, ‘remembering’ forward, which brands among the huge behemoths of today, the ones we think of as ‘too big to fail’, might do just that? Apple, Amazon, Google, Facebook, Netflix – which ones will still be as big or bigger in 50 years’ time, which ones will even be around?

I think only Apple, because they have built on the legacy of Steve Jobs – great marketing and a great ability to read (lead) the market. Apple is a market-focused brand.

Facebook are not reading the world or the market – public opinion is turning against people manipulating OUR data for THEIR benefit. Facebook are more like a self-serving ideology than a market-focused brand.

Netflix is just a platform with little in the way of barriers to competition or brand loyalty. Anyone can do to them what they did to Blockbuster.

Ditto Google – the day a search engine with the latest/best AI offers us something better with just one click, Google are no longer the default. All that might be left is the verb, ‘to google something’.

Amazon – don’t know; they do seem to be just way too big to fail, with a highly efficient business model and innovative mind-set. But they are not a brand, no-one cares about them. If they stuff up it would be the biggest shocker but, like IBM, no-one would shed a tear.

Kodak still makes me sad. I guess they did not have anyone like Steve Jobs, and I know for a fact they did not listen to their product, sales or marketing people. One of Kodak’s engineers, Steven Sasson, invented the digital camera in 1975. It was big and heavy, but they did not do much to perfect it; others did. By the late 80s their own sales people in Scandinavia, where digital was taking off, were warning of the potential impact, but they were told just to work harder to hit their target sales of film.

The prerequisite of reading a market is to be able to define the market you are in – not the business, the market. I have wonderful memories of Kodak because I associate them with happy occasions: loading film, taking pictures, waiting to see which ones turned out well when you got them back, vivid colours. Kodak advertising in the 1980s/90s used this insight; the ads spoke of lasting memories and the importance of lasting colour, which is why you had to trust Kodak (ignore the French campaign, that had a couple of munchkins running around, it kinda worked but only the French understood why).

I believe Kodak were in the memory market and that is a huge market growing every day, forwards and backwards. Where does that insight get you? Well what market are Google in if not the global digital memory market?

How to build brands and influence people


I had coffee this week with an old friend who for years ran a really successful luxury brands marketing company. He shut down the business and has now happily moved on to other interests. Does he miss the old life? Not at all, he said, it all changed. Over the years he and his team developed a very successful model to launch and build brands using all the levers – events, advertising, sponsorship, direct marketing etc – deployed in the right sequence. “These days it’s just one lever you need to pull, influencers,” he lamented. “No skill in that.” Well, maybe, but I wanted to start with the premise – can you successfully build brands with just influencers?

To narrow this down I rule out celebrity endorsement – we know that works. Princess Diana was photographed wearing Gucci loafers and sales exploded. Harry Styles has the same kind of global fame and is a global ambassador for Gucci but, unlike Diana, he gets paid. I’m sure it’s great for Gucci but not as good as unsolicited celebrity endorsement. However, no question, if you can get George Clooney to be the face of your coffee it will work – but it will cost you.

I also rule out influencer sponsorship. Nike has always sponsored the best, most iconic sports stars or teams, from the New Zealand All Blacks to Tiger Woods to LeBron James. They still do it, but so do other people – and it still works. Sports brand Castore has gone from ‘never heard of them’ to ‘must have’ in just a couple of years by securing a very wide-ranging bunch of sponsorship deals. England play South Africa at cricket – both sides are wearing Castore. But again, expensive – Castore have raised over $60 million and have a credit facility of $75 million from backers such as Andy Murray (nice one Andy), HSBC, BNP Paribas and Silicon Valley Bank (whoops).

No, I want to focus on the relatively much less expensive on-line influencer model to which my old mate was referring. If you google ‘Successful influencer case studies’, as I have done, you will get a lot of hits. There are loads and loads of curated sets of case studies – I’m not going to repeat any of them here, you can look them up for yourself. They include big brands like Nike, Boss, Olay, and a host of small brands. As a general rule – there are exceptions – the big brands did not use only on-line influencers, because they can afford to do a lot more, and the smaller brands did, because they can’t.

Some case studies offer hard sales data, e.g. number of mattresses sold (yes, on-line mattresses is a fertile area for start-ups who use influencers). In other cases they quote reach/eyeballs. I love sales, but I also value fame, awareness and brand saliency. Many moons back the late Jeremy Bullmore (whose wisdom cannot be gainsaid) explained that most of what makes a brand successful is widespread fame. Top-of-mind awareness is proven to be a lead indicator of brand sales. Saliency – “yes, I’ve not only heard of you, I know something about you that interests me in a context relevant to my life and your brand” – is the jackpot. So I put great value on the reach of influencers and the engagement they generate. They have followers, not just eyeballs. Let me elaborate on that. Most advertising – posters, TV, press – gets eyeballs; if the ads are any good they may get a reaction, and they build fame. But unless you are Gold Blend (very old TV ad case study – they created a kind of mini soap opera so you looked forward to seeing the next one), advertising does not get followers, people who put their digital hand up to receive your next missive.

Sifting through all the various case studies I harvested a number of best practice guidelines:-

  1. Choose the right influencers – bang on for your target audience would be a good start. So would value for money, and there seems to be a trend towards using micro and ‘nano’ influencers rather than the big macro influencers. I guess that allows you to be very precise with your targeting, and they are cheaper as a rule. One great piece of advice was to look at follower engagement, not just number of followers (there is a quick sketch of that after this list).
  2. Have a strong creative idea (I’ll come back to that).
  3. Collaborate with the influencers: let them produce the content they think will work with their followers, based on the creative idea. The result is often more creative, more authentic, and more ‘native’.
  4. Give them product coupons they can give away – they love this since they are looking for ways to reward their followers.
  5. Drive multi-channel – there are only 5 platforms to focus on: Facebook, Instagram, TikTok, Pinterest and Twitter.
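On that first point, here is a minimal, purely hypothetical sketch of ranking candidate influencers by engagement rate rather than raw follower count – every handle and number is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Influencer:
    handle: str
    followers: int
    avg_interactions: int  # likes + comments per post, averaged

    @property
    def engagement_rate(self) -> float:
        return self.avg_interactions / self.followers

candidates = [
    Influencer("@mega_macro", 2_000_000, 10_000),  # big reach, 0.5% engagement
    Influencer("@niche_nano", 8_000, 640),         # tiny reach, 8.0% engagement
]

# Rank by engagement, not follower count - the nano influencer comes out on top.
for inf in sorted(candidates, key=lambda i: i.engagement_rate, reverse=True):
    print(f"{inf.handle}: {inf.engagement_rate:.1%}")
```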

The beauty of all this is that the influencers produce the content (not totally free but really, really cheap) and, if good enough, the audience drives the spread of content across channels.

I said I wanted to come back to the creative idea. One case study did catch my eye: Boss. It does involve a celebrity, and it was part of a big campaign including a lot of advertising, press and posters. But the top spin was provided by influencers.

Chris Hemsworth is f****** cool and he is the face of Hugo Boss, from fragrances to clothing; doubtless you’ll have seen the ads. As part of this multi-million-dollar campaign they made a great piece of YouTube content – Hemsworth surfing in a Boss suit.

This was picked up by several influencers who undertook various stunts wearing suits for the #suit challenge. Great idea, but Boss drove all this and spent a lot.

Now let’s illustrate this with a fictional example for a brand with little money.

Imagine you’re a new kind of energy-giving, sustainable breakfast granola. The target market is the yoga set, students, the environmentally aware, health-food-conscious mums etc. So you know how to pick your influencers. You have an overall creative platform that this new granola is just one small thing you can do to make a difference to your life and the world. Every day is a fresh opportunity to try something new. That kind of thing. Specifically, you want to create brand saliency around breakfast and the start of a great new day.

So line up your army of micro-influencers, give them bucket loads of free product to give away and free rein to produce the content they want, but with the general idea of ‘#how did your day turn out?’

Say I’m a sports science student and influencer producing content on keep-fit ideas. I get a few mates together on the morning of exams, give them the granola and my unique warm-up exercises, and then track what happens: did they do well, how did they feel later in the day?

That took me 5 minutes. Maybe not the best idea but you get the point. Pulling the influencer lever is not that hard and it’s a lot cheaper.

The only limits to your influence are the limits of your imagination and ambition, even if you have virtually no budget.