The Future is Human – GenAI’s role within Communities of Practice (Part One)
The first in a series of posts I intend to write on how to use Generative AI in an effective and responsible way within community spaces
By now, I’m pretty sure we’ve all heard of Generative AI. If you haven’t, I’ve written about it previously and have been talking about it in one form or another pretty much constantly over the past twelve months. It’s not simply because I am a technologist who loves technology – actually it’s not that at all – it’s because I am a human and I love human behaviour. And this is the struggle I have with GenAI right now… we all seem in such a rush to replace the things that make us human with tools driven by large language models that don’t appreciate, understand, or care for the intricacies of human interaction.
The most recent and most interesting example I’ve come across is an app called Butterflies. For those who haven’t heard of or tried it yet, try and stay with me as it’s a bit of an odd one. Essentially you (the human) create a persona (the AI) based off a series of prompts and expressed preferences. You (the human) then release your persona (the AI) off into a social network that looks a little bit like Instagram, where it posts images of itself for other humans and AIs to interact with, and in turn interacts with the posts of other AI personas.
I’m sure that many of you – like me – will think to yourselves: why on earth would anybody want this?!
I don’t pretend to be cool or even be all that active on social media (I don’t have an Instagram, Facebook or whatever Twitter is called now), but if we look beyond the slightly unusual premise there’s actually an interesting business idea underneath all of this. If you are a celebrity or business or public figure, you may have millions of fans/customers/users who all want to interact with you. Clearly you can’t respond to every single person that reaches out to you, but if you could train an AI to behave and respond to them on your behalf AS YOU, then your reach goes way further, followers become more engaged with you and your brand (urgh), and are willing to spend more of their money on or with you. Don’t believe me? The pioneers over on OnlyFans have been doing a variation of this for months now, licensing their likenesses to generate AI content in their own image. In Asia there are K-pop bands that have already done this in a modern-day version of the ‘call now to speak to a celebrity’ phone lines that cost £4.00 a minute back in the nineties.
The next step of this principle is even more useful. What if you – a non-celebrity – were able to train an AI on you and your behaviours to act on your behalf? To go to all those meetings that you’re not interested in going to, or provide updates to stakeholders when you’d much rather be getting on with your job? Suddenly hours and hours of your precious time have been freed up without anyone noticing, leaving you to focus on the things you actually want to spend your time on. Life and work become fulfilling! Praise our new AI avatars!
Yes, but also no
We will leave aside the glaring caveat that AI is technology, and all technology is inherently insecure. We will also leave aside the reality that GenAI tooling is not currently at the required level, and (although the pace of progress has been remarkable) there’s no guarantee that GenAI in its current form can or will ever achieve the levels of cognisance and understanding required for this meeting-free utopia. What troubles me is that in doing so we would be choosing to replace and automate the things that make us unique and special and exceptional, for really no good reason other than because the technology exists. This does not feel like a responsible or effective use of GenAI tooling, and it becomes even worse when you apply that thinking to communities of practice. On the surface there is an argument for why it would make sense – why shouldn’t a Community Leader have an AI persona to respond at any time of day to any member who needs help or wants support? Why shouldn’t a Community Member have an AI persona who is able to be there instantly when someone reaches out to a knowledge community for help with a query that the human knows the answer to, but is busy at her child’s sports day or is cooking his family dinner?
I will adapt an anecdote I find myself coming back to repeatedly when considering whether something is a good or bad use of GenAI…
Do I care that when I go on a website, it is an automated script that responds with a pop-up welcoming me and not a human? No – it is inconsequential to me, and most of the time I don’t read it.
Do I care that when I join a new Slack group or online community, I am welcomed by an automated email and not a bespoke hand-typed message thanking me for joining and saying they’re glad to have me? No – I understand that this is a nicety and it doesn’t mean that much to me.
Do I care if the spam email I receive asking if I would be interested in joining their event/buying something on sale/checking out their job listings board is written by an AI and not someone specifically thinking of me? No – I know this interaction is not bespoke and I am not special when I receive this email.
What if it’s an email from my boss, who used an AI to write and send a message asking me to work late, or represent him at a meeting? Well hang on, I’m not sure how I feel about that…
What if my HR Manager has told their GenAI persona to send me an email advising me that, due to poor performance or budget cuts, I am going to be fired? Well no, that seems cold and I want some human involvement…
What if I join a Community of Practice that is important and actually means something to me, only to find that the Community Leader and two or three members have chosen to replace themselves in the ‘meeting’ with their AI personas, because they consider something else (rightly or wrongly) to be more important? Well… if it’s not important to them, why should it be important to me?
We all understand that a painting is much more valuable than a poster of the same painting. There is value beyond the material, and the same is true of human interaction. It’s not enough to just get the same output if we do so without the context – it’s not the same. People join Communities of Practice because they are seeking other like-minded people who care about the same things that they do, and if all they find in your community is AI personas, then they’ll keep searching until they find what they’re looking for elsewhere.
I’m not saying that GenAI tooling has no place in your Community of Practice. It does. I use it every day, it helps me do more than I could without it, and I’ll be writing more about how I do that over the coming weeks. What I am saying is that just because we technically CAN do something does not mean that we HAVE to do it. We have free will. It’s what makes us human and wonderful. We have emotions and we feel, and we cause emotions and feelings in others. It’s what makes us exceptional. Why would we voluntarily choose to devalue and give up on those things? I encourage anyone and everyone reading this to consider not whether it’s faster to automate that task with AI, but whether doing so improves our experience as humans – because if that’s not the goal, what is?
Thanks so much for reading. If you’ve enjoyed this post I’d really appreciate it if you could share it – alternatively, you could always buy me a coffee :)