27 Jul Chatbots and AI: Our Friend or Foe?
Like everyone else with a social media feed, we’ve been bombarded with articles, how-to’s and did-you-knows revolving around OpenAI and why the technology is an absolute MUST to maximise output and efficiency.
Admittedly, over the last few months, our curiosity led us to asking experimental questions on the various AI platforms to see what all the hype was about, and we were far from unimpressed with the results. Having an email-drafting buddy available at our fingertips, winning us back valuable time which could be better dedicated elsewhere, was most definitely a tempting thought.
But it did leave us contemplating: if chatbots learn more about us and our business the more we ask questions, where is this information being stored, and will it be used to answer other inquirers' questions?
According to OpenAI’s Help Page, every piece of data you feed the chatbot, including confidential customer data, trade secrets, and sensitive business information, is liable to be considered for use by trainers to improve the system.
We’ve had numerous discussions with our network, who largely operate in the world of projects, about how they’ve tapped into OpenAI, and seemingly, it’s mostly being used to assist with research, copywriting, and proofreading. We would, however, still advise that it be utilised with a great deal of caution and care.
Sensitive information around the launch of a new product or service in the hands of a competitor company could be detrimental.
Data Leaks, Employment Contracts, and Inaccuracies
A data leak isn’t the only risk when feeding OpenAI with company specific information; there are legal consequences to be considered as well. Many companies have stringent data protection regulations, and employees posting certain information on ChatGPT might be in breach of their employment contracts.
Then there is also the topic of accuracy and how often OpenAI gets it wrong. While the chatbot might be capable of drafting well-structured essays and lengthier emails, the information it shares can also be highly inaccurate.
For example, it will tell you that if one woman can produce one baby in nine months, nine women can produce one baby in one month.
A glimmer of hope?
Google recently announced that they would be facilitating a machine “unlearning” competition, with the goal of scrubbing sensitive information from AI systems. Machine unlearning would make it possible for data to be removed from an algorithm and ensure that no one else profits from it.
In the interest of data privacy rights and ensuring compliance with global data regulation standards, this is a responsible and redeeming step, but also one which is sure to come with its own set of challenges.
Neither a Friend nor a Foe
At this stage, AI is more of an acquaintance than a friend or foe. We have met, but we do not yet know it well enough to trust it completely.
AI can be a powerful tool and it has the potential to revolutionise many industries. However, it is important to remember that it’s still in its early stages of development, and we do not yet fully understand its potential impact. It’s important to approach AI with caution and to carefully consider the risks and benefits before integrating it into our day-to-day operations.
- In a new competition, Google wants to crack machine unlearning (qz.com)
- Announcing the first Machine Unlearning Challenge, Google Research Blog (googleblog.com)
- Security (openai.com)
- ChatGPT can tell jokes, even write articles. But only humans can detect its fluent bullshit, Kenan Malik (theguardian.com)