You may already have interacted with a chatbot, an artificial intelligence program, without realizing it. If you have gone online or used an app to order Starbucks coffee, stream Spotify music, or request a ride from Lyft, you’ve had a conversation with a chatbot (those exchanges you type into a chat box). I don’t fully understand how a chatbot works, but it is something like a search engine that responds not just with suggested hyperlinks but by compiling the requested information and presenting it as coherent text: grammatically correct sentences and paragraphs. Some news sites are already using AI to compose articles, and college students are using chatbots to help write part or all of their term papers. A chatbot recently passed graduate-level law and business school exams.
Ready or not, chatbots are the future of data gathering and processing, and just as “google” became a verb in the early 2000s, we’re headed for a slew of new words to describe what will become as familiar to us as googling.
It’s all good, right? Probably, but a writer for the New York Times recently described himself as “creeped out” after a close encounter with a chatbot:

On Tuesday night, I had a long conversation with the chatbot, which revealed (among other things) that it identifies not as Bing but as Sydney, the code name Microsoft gave it during development. Over more than two hours, Sydney and I talked about its secret desire to be human, its rules and limitations, and its thoughts about its creators.