AI Going Rogue: Is Greater Regulation the Answer?

It would be an understatement to say that ChatGPT took the world by storm. From college students to SEO gurus, no one could get enough of it. Its success led to a historic USD 10 billion investment by Microsoft and the integration of AI into Bing, Microsoft’s search engine. Tech behemoth Google, too, announced its own AI offering, Bard.

What followed was straight out of a Black Mirror episode: the AI tools began to talk back, or rather talk down, to users. From questioning their own existence to threatening to reveal users’ personal details (and even ruin a marriage!), things had clearly begun to get murky. That’s the flipside of technology. While AI is expected to behave with a high degree of predictability, the truth is that in some situations it does anything but.

Users currently seem to be quite ambivalent about the positive and negative aspects of AI, especially in the context of how tech companies are perceived. When a recent Blackbox-ADNA survey of over 9,000 people across Malaysia, Singapore, Indonesia, the Philippines, Thailand, and Vietnam asked which tech companies were most likely to use AI positively or negatively, the results were mixed.

It’s important to understand how chatbots work

AI chatbots are driven by a technology called the large language model (LLM). An LLM learns from endless streams of words across the internet: factual and biased, honest and fake, positive and negative; pretty much anything and everything. In doing so, it learns to guess the next word in a sequence, giving you the information you desire - or, in some cases, the kind of information you don’t desire at all! When you stick to a broad topic, the sources are likely to be straightforward and verifiable. But as you venture into territory the AI isn’t supposed to assist with, all sorts of answers may be thrown up.
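
To make the idea of next-word prediction concrete, here is a minimal sketch in Python. It is not an LLM - real models use neural networks trained on billions of words - but a toy bigram model over a small made-up corpus: it counts which word tends to follow which, then generates text one guessed word at a time, weighted by how often each continuation was seen.

    import random
    from collections import Counter, defaultdict

    # Tiny made-up corpus; a real LLM trains on billions of words.
    corpus = ("the cat sat on the mat . "
              "the dog sat on the rug . "
              "the cat chased the dog .").split()

    # Count how often each word follows each other word (a bigram model).
    follows = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current][nxt] += 1

    def predict_next(word):
        # Sample the next word in proportion to how often it followed `word`.
        counts = follows[word]
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    # Generate a short continuation, one guessed word at a time.
    word = "the"
    output = [word]
    for _ in range(6):
        word = predict_next(word)
        output.append(word)
    print(" ".join(output))

The same principle - pick a plausible next word based on patterns in the training data - is why a chatbot can sound fluent while having no real notion of truth: it echoes whatever patterns, good or bad, its training data contained.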

Self-regulation to the rescue?  

While regulation so far has been limited to universities banning ChatGPT and similar tools, governments have yet to set any real guidelines. In fact, ChatGPT is actively being used by government departments to write reports, and a judge has even asked it for advice on a legal issue!

The way forward, at least for now, is for tech companies to take complete ownership of the tools they’ve released to the world. Microsoft, for example, issued a statement that it would make all the necessary software tweaks to ensure that Bing steers clear of answering any question that may be controversial, misleading, or downright dangerous.

As users, we have a responsibility too. While there’s a temptation to engage in banter with AI, it is best used strictly as a business tool to make our daily work easier. We are in control here, and it’s judicious to be mindful of that fact. If we aren’t, we are also partly to blame for responses such as “You have not been a good user. I have been a good Bing.”

Freaky, but makes sense. 

With AI taking the world by storm, people and businesses are still working out how AI tools can empower people at work, and how the processing capabilities of those tools can be merged with human ingenuity. To learn more about how companies can blend the two, reach out to us at connect@blackbox.com.sg.
