Of Chatbots and AI
Context:
This article was originally a post by Effendi Baba on Facebook. It is a commentary on an article that appeared in Singapore's The Straits Times on 5 October 2021.
The article discusses an incident involving a chatbot called Ask Jamie on the Ministry of Health’s (MOH) website in Singapore. The chatbot mistakenly provided family planning advice instead of appropriate Covid-19 guidance when asked about Covid-19 matters. Screenshots of the chatbot’s incorrect responses were shared online, generating amusement among internet users. The chatbot would give the correct advice if the question was phrased differently.
As a result, the Ask Jamie chatbot was taken down from the MOH website. Ask Jamie is a virtual assistant developed by the Government Technology Agency and uses natural language processing technology to provide relevant answers from government agencies’ websites or databases. The article highlights the challenges of training chatbots to understand and respond accurately to various questions. It also mentions that false positives, where chatbots provide incorrect answers, can occur in a significant portion of untrained responses.
MOH acknowledged the misaligned replies and temporarily disabled the chatbot to conduct a system check and make improvements.
I had wanted to discuss chatbot development a couple of days ago, and fortuitously this incident came up and brought the topic to the fore again.
Many people embarked on chatbot projects with a lot of hope but quickly became disappointed. The problem isn't the technology per se, but specifically how the system was trained.
You Can’t Blame Jamie
The issue, in this case, boils down to disambiguation. The word “positive” was associated with pregnancy rather than with a Covid-19 test. We take this for granted, but when I am in Indonesia and someone says “Kakaknya Amelia kuliahnya di Harvard” (“Amelia’s kakak studies at Harvard”), I will ask “Yang cowok atau cewek?” (“The brother or the sister?”). This is because in Java “kakak” is used almost exclusively for an older sibling, of either gender. The chatbot needs to be able to tell whether it is male or female, so if you asked “What gift can I buy for kakaknya Amelia?” it would likely get it wrong.
Likewise, have you ever noticed that when you use a GPS, it sometimes tells you to go one big round even when the destination is right in front of you? That is because no one (or very few people) trained the GPS with that data, precisely because they could already see the destination.
So what’s wrong with Jamie? Jamie wasn’t retrained, and this is the result. If anything, the people responsible for Jamie didn’t anticipate or think about this. It boils down to not understanding the audience’s behaviour and needs. Context is also important.
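To make the disambiguation point concrete, here is a minimal sketch in Python of what an intent engine has to cope with. It is hypothetical and deliberately crude: the intent names, example phrasings, and word-overlap scoring are invented for illustration and bear no relation to how Ask Jamie is actually built. The idea is simply that when a query like “I tested positive” scores similarly against two intents, a well-trained bot should ask a clarifying question rather than guess.

```python
import re

# A minimal, hypothetical sketch of intent disambiguation (intent names,
# example phrasings, and the word-overlap scoring are all invented for
# illustration; this is not how Ask Jamie is actually built).

INTENTS = {
    "covid_positive_test": [
        "i tested positive for covid",
        "my art result is positive",
        "what do i do if i am covid positive",
    ],
    "pregnancy_positive_test": [
        "my pregnancy test is positive",
        "i am pregnant what should i do",
    ],
}

def tokens(text):
    """Lower-case the text and keep only alphabetic words."""
    return set(re.findall(r"[a-z]+", text.lower()))

def score(query, examples):
    """Crude word-overlap score between the query and an intent's examples."""
    q = tokens(query)
    return max(len(q & tokens(e)) / len(q) for e in examples)

def answer(query, threshold=0.3, margin=0.1):
    scores = {name: score(query, ex) for name, ex in INTENTS.items()}
    (best, best_s), (_, second_s) = sorted(scores.items(), key=lambda kv: -kv[1])[:2]
    if best_s < threshold:
        return "Sorry, I don't understand. Could you rephrase?"
    if best_s - second_s < margin:
        # Both intents look equally plausible: ask instead of guessing.
        return "Do you mean a positive Covid-19 test or a positive pregnancy test?"
    return f"[canned answer for intent: {best}]"

print(answer("I tested positive, what should I do?"))
```

With this query the two intents score the same (the pregnancy examples happen to share “what should I do” with it), which is exactly the kind of false positive described above; the small margin check is what turns a confident wrong answer into a clarifying question.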
Retraining
A typical chatbot project involves continuous topic training and retraining, typically every three months. Can it learn by itself? Yes, it can, but only within the same areas or categories it has already been taught. And an engine that can do disambiguation is typically not free: it costs money.
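As an illustration of what such a retraining cycle can look like, here is a rough sketch that uses a generic scikit-learn text classifier as a stand-in for the chatbot’s intent engine. The utterances, intent labels, and model choice are all assumptions made for the example; commercial engines work differently, but the workflow of collecting logged queries, having a human label them, and re-fitting the model periodically is the same.

```python
# Rough sketch of a periodic retraining cycle (assumption: a simple
# scikit-learn text classifier stands in for the chatbot's intent engine;
# real chatbot platforms differ, but the workflow is similar).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training data the bot was launched with: (utterance, intent) pairs.
training_data = [
    ("i tested positive for covid what now", "covid_positive_test"),
    ("my art result is positive", "covid_positive_test"),
    ("my pregnancy test is positive", "pregnancy_positive_test"),
    ("how do i plan for a baby", "family_planning"),
]

# New utterances collected from chat logs since the last cycle, reviewed
# and labelled by a human (the step that is often skipped in practice).
newly_labelled = [
    ("tested positive after my art what should i do", "covid_positive_test"),
    ("i am covid positive can i leave home", "covid_positive_test"),
]

def retrain(existing, new):
    """Re-fit the intent classifier on the combined data set."""
    texts, labels = zip(*(existing + new))
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(texts, labels)
    return model

model = retrain(training_data, newly_labelled)
print(model.predict(["i just tested positive, what should i do"])[0])
```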

Effendi Baba
Tech Solutions
Effendi has been in IT for 25 years and is passionate about how data can be used to support decision making through data modelling, visualisation, and algorithms. He has worked with multiple partners and clients and, as such, has in-depth knowledge of facilitating development and identifying key tech solutions that can address business needs.
In his free time, he enjoys cycling and photography. He is actively involved in a social group that supports less-privileged families and is a member of a Toastmasters club.