A few weeks ago, I had my first customer support experience of the future. I was in a meeting when my Android’s caller ID told me American Express was calling. I stepped out of the conference room and answered the call. A machine-generated woman’s voice identified itself as the American Express fraud department. “Do you have a Bluetooth headset or headphones you can use with your phone?” she asked.
I replied that yes, I did. She said to tell her when I was ready. So I plugged in my headphones and said, “Ready.” Then she asked if she could send me a link to a website by SMS. “Yes.” The message came and I clicked on the link.
The browser rendered an American Express page that asked me to log in. She prompted, “Please log into your account.” I signed in. Then she explained that the fraud department had identified two transactions as potentially suspicious. She said, “Those two transactions are listed on your phone. Could you examine each one and mark it as fraudulent or not?”
I clicked into each one and as I did, the voice began to read the key highlights of the screen. The date of the transaction. The vendor’s name. When I clicked Not Fraudulent, she said something to the effect of “let’s review the next one.” After I had indicated both transactions were mine, she said “Thank you, and have a nice day.” It was truly remarkable.
Why did it feel so different from a traditional customer support call? First, because the bot on the other end of the line was on the same screen as I was. She instantly “knew” where I was clicking and what the right next step would be. She guided me at every turn. The robot understood the context, in contrast to a typical support call, which requires several frustrating minutes to get on the same page as a customer service rep:
“What is your account number?” “What is your security word?” “What seems to be the problem?” “I’m sorry I don’t quite understand what you’re asking for.” And so on.
Second, the software presented me with all the information I needed to quickly make a decision. Without the screen, I would have waited much longer to hear the details spoken aloud before answering the questions about the potentially fraudulent charges. The whole call took less than two minutes.
Third, there’s something about someone explaining a new situation to you that accelerates comprehension. If I had received an email from American Express pointing me to the same webpage with the information about the payments, I could have managed just fine. But there was someone explaining, taking the time to ensure I understood, while moving things along. I felt very well taken care of - cosseted and gratified - which is the hallmark of great service. Even if that customer service rep was a robot.
For outbound support calls like fraud verification, this system works brilliantly. The domain of my answers is narrow; I’m not going to stump the machine with a random question. And the computer drives the agenda toward a goal that is known a priori, cajoling me through each step until the call is complete.
Inbound support calls are a much harder technical problem because a computer most likely will know much less about the reason for my call. I could imagine a similar system working for inbound calls if there were a way to transmit my context (which page I’m looking at, what problem I’m facing) and then having a robot walk me through the steps to fix it. That’s an interesting, unsolved user experience and engineering problem, and a big opportunity.
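One way to picture that missing piece: the customer’s app could serialize its current state into a small payload and transmit it when the inbound call begins, so the system starts with the same context a human rep would spend minutes gathering. This is a minimal sketch of the idea; the function name, field names, and error code below are all hypothetical, not any vendor’s actual API.

```python
import json

def build_support_context(page_url, last_action, error_code=None):
    """Assemble a context payload a client app could send alongside
    an inbound support call. All field names here are illustrative."""
    context = {
        "page": page_url,            # which page the customer is looking at
        "last_action": last_action,  # what the customer just tried to do
    }
    if error_code is not None:
        context["error_code"] = error_code  # the problem they are facing
    return json.dumps(context)

# Example: a customer calls after a failed payment update.
payload = build_support_context(
    "https://example.com/account/billing",
    "update_payment_method",
    error_code="CARD_DECLINED",
)
```

With a payload like this in hand, a bot could skip the “What seems to be the problem?” preamble entirely and open the call already knowing the page and the error.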
This type of multi-channel customer support call, with data flowing both through audio and IP, both through the telephone and across a screen, is the future of customer support. More data establishes a shared context rapidly, which means quicker answers and more satisfied customers.