Social messaging app Snapchat has recently added chatbot functionality powered by ChatGPT. Considering many of its users are children and young people, is this a good idea?
For users of the standard, free version of the app, it’s currently not optional – the feature will appear at the top of your Friends feed, whether you want it to or not.
Snap – the company behind Snapchat – is clearly aware that there are potential dangers. The information page on its newest feature is upfront about the fact that My AI “may include biased, incorrect, harmful or misleading content” and suggests that users should independently verify any advice it gives before acting on it. (We all know how children just love to read product information pages, right?)
It also lets users know that My AI knows their location and that any data collected through it may be used to personalize and improve the service it provides.
To me, this raises a number of important questions. As with many other social apps like Facebook, users as young as 13 can sign up without the need for parental approval. Of course, it’s well-known that many younger children manage to access it simply by lying about their age when joining, and there are very few (if any) safeguards in place to stop this from happening.
ChatGPT is, of course, also available on the web for anyone to access, regardless of age. But making it a prominent feature of an app that’s widely used by youngsters every day to communicate with friends means, in my opinion, that we can’t overlook the safety implications specific to this new development.
Are AI chatbots safe for children?
Firstly, as anyone who has been following the recent development of chatbots like ChatGPT and Bard knows only too well, to say they are a little prone to handing out misinformation is an understatement. As I’ve mentioned, Snap has tried to head off this criticism by stating that all information should be verified. But is it really likely that the average child or teenager is going to bother to do so? We all know that taking risks is a part of growing up, but if a chatbot gives out incorrect advice about actions or activities that might be unsafe, it could lead children into dangerous situations.
Another issue is privacy. My AI is open about the fact that it collects and stores information on users, but when those users are children, they might not always be capable of making the best decisions about what information is or isn’t safe to share with it.
There’s also a danger that chatbots can be used to engage in abusive or bullying behavior – for example, by creating harassing content that couldn’t easily be traced back to the person responsible for making it. Chatbots might enable a form of “bullying by proxy” because the bullies don’t feel they are responsible for the output of the bot, even if they’ve prompted its creation.
And as the My AI chatbot converses with users as if it’s a friend, we also have to consider that some children might choose to think of it as such – when in fact it’s a piece of corporate software primarily designed to increase the time they spend engaging with its maker’s products and services. Indeed, when I briefly tried it out myself, it even went as far as to deny being an AI at all, claiming to be a “regular person.” This seems somewhat hypocritical when Snap’s own guidelines state that users should always be honest about when the content they generate is created by AI.
Any parent will also recognize that some children might find talking to My AI addictive. This could become a problem if it gets to the point where they prefer it to interacting with other humans.
These are all risks that everyone – particularly parents – needs to be aware of with any technology. But I can’t help feeling that the potential problems loom larger when we’re talking about a chatbot AI wired directly into an application as popular and prevalent among youngsters as Snapchat.
Hopefully, what I’ve written here won’t be taken as scaremongering. It’s important to acknowledge that AI has the potential to be a force for positive growth as well. Allowing young people to use and interact with it from an early age could help to prepare them for a future in which AI is going to play a prominent part in their lives. One technology-minded friend I was talking to recently pointed out that growing up today without learning how to effectively interact with AI would be like growing up in the seventies without learning how to use a calculator or growing up in the eighties or nineties without learning the basic functions of a personal computer. Or growing up in the noughties without learning how to search for information online.
It’s likely that as the children of today grow into adults, it will become commonplace to use AI for schoolwork, hobbies, and eventually for the world of work. In fact, it will probably be totally normal for them to use AI for things we can’t even imagine right now. Learning how to interact with it now may be the same sort of rite of passage that many of their parents’ generation (which includes me) experienced as we experimented with computers and explored the internet.
Nevertheless, it would be negligent to overlook the risks. As with all new technology, I think it’s important that parents keep a close eye on how their children interact with this intriguing new friend and watch for signs that it could become a bad influence.