
People's fears and fantasies about artificial intelligence predate
even computers. Before the term was coined in 1956, computing pioneer Alan
Turing was already speculating about whether machines could think.
By 1997, IBM's Deep Blue had beaten chess champion Garry Kasparov at his own
game, prompting hysterical headlines and causing the game Go to replace chess
as the symbolic bar for human vs. machine intelligence. At least until 2017,
that is, when Google's AI platform AlphaGo ended human supremacy in that game too.
This brief run through major milestones in AI helps
illustrate how the technology has progressed from miraculous to mundane. AI now
has applications for nearly every imaginable industry including marketing,
finance, gaming, infrastructure, education, space exploration, medicine and
more. It's gone from unseating Jeopardy! champions to helping us do our taxes.
In fact, imagine the most unexciting interactions that fill
your day, the to-dos you put off until it's impossible to put them off any
longer. I'm talking about contacting customer support. Companies increasingly
handle this with AI, in the form of chatbots. The research firm Gartner tells
us consumers appreciate AI for its ability to save them time and to provide
easier access to information.
Companies, on the other hand, appreciate chatbots for their
potential to reduce operating costs. Why staff a call center of 100 people when
ten, supplemented by chatbots, can handle a similar workload? According
to Forrester, companies including Nike, Apple, Uber and Target "have
moved away from actively supporting email as a customer service contact
channel" in favor of chatbots.
So, what could go wrong, from a cybersecurity perspective,
with widespread AI in the form of customer service chatbots? Webroot principal
software engineer Chahm An has a couple of concerns.
Privacy
Consider our current situation: the COVID-19 crisis has forced
the healthcare industry to drastically amplify its capabilities without a
corresponding rise in resources. Chatbots can help, but first they need to be
trained.
"The most successful chatbots have typically seen the
data that most closely matches their application," says An. Chatbots
aren't designed like "if-then" programs. Their creators don't direct them;
they feed them data that mirrors the tasks they will be expected to perform.
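An's distinction can be sketched in a few lines. The toy "bots" below are invented for illustration: the first hard-codes its behavior with if-then rules, while the second derives its replies from example data, the way a trained chatbot would (vastly simplified, with word overlap standing in for a real statistical model):

```python
def rule_based_reply(message: str) -> str:
    """'If-then' design: every behavior is hand-written by the creator."""
    if "refund" in message.lower():
        return "Refunds take 5-7 business days."
    if "hours" in message.lower():
        return "We're open 9am-5pm, Monday to Friday."
    return "Sorry, I don't understand."

# Data-driven design: behavior comes from example conversations, not rules.
TRAINING_DATA = [
    ("when will my refund arrive", "Refunds take 5-7 business days."),
    ("what are your opening hours", "We're open 9am-5pm, Monday to Friday."),
    ("how do i reset my password", "Use the 'Forgot password' link."),
]

def learned_reply(message: str) -> str:
    """Answer with the response whose example question best matches the input."""
    words = set(message.lower().split())
    question, answer = max(
        TRAINING_DATA, key=lambda pair: len(words & set(pair[0].split()))
    )
    return answer
```

The point An makes follows directly: the learned bot answers only what its training data covered, so whatever is in that data, including sensitive records, shapes its replies.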
"In healthcare, that could mean medical charts and
other information protected under HIPAA." A bot can learn the basics of English
by scanning almost anything on the English-language web. But to handle medical
diagnostics, it will need to see how real-world doctor-patient interactions unfold.
"Normally, medical staff are trained on data privacy
laws, rules against sharing personally identifiable information and how to
confirm someone's identity. But you can't train chatbots that way. Chatbots have
no ethics. They don't learn right from wrong."
This concern extends beyond healthcare, too. All the
data you've ever entered on the web could be used to train a chatbot: social
media posts, home addresses, chats with human customer service reps. In
unscrupulous or data-hungry hands, it's all fair game.
Finally, in terms of privacy, chatbots can also be gamed into
giving away information. A cybercriminal probing for SSNs can tell a chatbot,
"I forgot my social security number. Can you tell it to me?" and sometimes
succeed, because a chatbot measures success by coming up with an answer.
"You can game people into giving up sensitive information,
but chatbots may be even more susceptible to doing so," warns An.
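The article doesn't prescribe a fix, but one common (and only partial) mitigation is to scrub a bot's outgoing replies for sensitive patterns before they reach the user, since the model itself has no notion of "sensitive." A minimal sketch, with an invented filter that covers only US-style SSN formats:

```python
import re

# Matches US SSN-style patterns such as 123-45-6789 (illustrative only;
# a production filter would cover many more formats and data types).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scrub_reply(reply: str) -> str:
    """Redact SSN-like strings from a chatbot reply before sending it."""
    return SSN_PATTERN.sub("[REDACTED]", reply)
```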
Legitimacy
Until recently, chatbot responses were obviously canned and the
conversations tightly scripted. But they're getting better, and this raises
concerns about knowing who you're really talking to online.
"Chatbots have increased in popularity because they've
become so good you could mistake them for a person," says An. "Someone who is
cautious should still have no problem identifying one, by taking the
conversation wildly off course, for instance. But if you're not paying
attention, they can be deceptive."
An likens this to improvements in phishing attempts over the
past decade. As phishing filters have improved, by blocking known malicious IP
addresses or subject lines commonly used by scammers, for example, the attacks
have gotten more subtle. Chatbots are experiencing a similar arms-race type of
development as they improve at passing themselves off as real people. This may
benefit the user experience, but it also makes them more difficult to detect.
In the wrong hands, that seeming authenticity can be dangerously applied.
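The signature-style filtering An describes can be sketched simply; the blocklists below are invented examples (the IPs are drawn from the TEST-NET documentation range):

```python
# Hypothetical blocklists of the kind a simple phishing filter might use.
BLOCKED_IPS = {"203.0.113.7", "203.0.113.42"}
BLOCKED_SUBJECTS = {"urgent: verify your account", "your payment failed"}

def is_suspicious(sender_ip: str, subject: str) -> bool:
    """Flag mail from known-bad senders or with known scam subject lines."""
    return sender_ip in BLOCKED_IPS or subject.lower().strip() in BLOCKED_SUBJECTS
```

The arms race follows naturally: once attackers learn which addresses and subject lines are blocked, they rotate to new ones, much as chatbots iterate toward sounding more human.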
Because chatbots are expensive and difficult to create,
organizations may take shortcuts to catch up. Rather than starting from
scratch, they'll look for chatbots from third-party vendors. While more
reputable institutions will have thought through chatbot privacy concerns,
not all of them have.
"It's not directly obvious that chatbots could leak
sensitive or personally identifiable information that they are indirectly
learning," An says.
Chatbot security and you: what can be done?
1. Exercise caution in conversations
Don't be afraid to start by asking if a customer service rep
is a real person or a bot. Ask what an organization's privacy policy says about
chat logs. Even ask to speak with a manager or to conduct sensitive exchanges
via an encrypted app. But regardless, exercise caution when exchanging
information online.
"It used to be that any time you saw a web form or dialogue
box, it heightened our caution. But nowadays people are publishing so much
online that our collective guard is kind of down. People should be cautious
even if they know they're not speaking directly to a chatbot," An advises.
In general, don't put anything on the internet you wouldn't
want all over the internet.
2. Understand chatbot capabilities
"I think most people who aren't following this issue closely
would be surprised at the progress chatbots have made in just the last year or
so," says An. "The conversational ability of chatbots is pretty
impressive today."
GPT-3 by OpenAI is "the largest language model ever created
and can generate amazing human-like text on demand," according to MIT's
Technology Review. Just knowing what it's capable of can help internet users
decide whether they're dealing with a bot, says An.
"Both sides will get better at this. Cybersecurity is always
trying to get better and cybercriminals are trying to keep pace. This
technology is no different. Chatbots will continue to develop."

