In late November 2022, OpenAI (backed by Microsoft) debuted ChatGPT, an AI chatbot that can understand human language, carry on humanlike conversations, and automate basic tasks. Within a few days, the chatbot had over a million users. Now, a few months later, it has over 100 million.
Google has since announced its own chatbot, Bard. Based on Google's existing base of daily users, experts estimate that Bard could reach a billion users within two months of its official launch.
AI chatbots aren't limited to tech giants, either. Businesses around the world are already using them for customer service, marketing, HR tasks, and expense management.
Although employing chatbots for transactions and conversations isn't new to certain enterprises, their growing prevalence and public reach keep introducing new variables into how they're used (or misused) and how they affect the world off-screen.
While some of these new uses seem fairly harmless (like writing silly songs and pun-filled jokes) and others genuinely useful (like drafting lecture notes or providing 24/7 support), chatbots are already on record producing factual errors and distorted information, manipulating users emotionally, and even generating hate speech. Ultimately, these chatbots are built on language models that humans develop and that are trained on vast amounts of data humans have generated over time. So, how can we maximize their value while minimizing the risk that they do more harm than good?
The answer is simple, if not easy in practice: knowledge sharing. As more capital drives this innovation from behind the scenes, and more users shape its societal impact from behind closed doors, transparency in chatbot training, evaluation, and performance will become more critical than ever for creating safe, unbiased, and ethically sound deployments.