ITBusiness.ca

5 potential pitfalls of Facebook Messenger's new chat bot feature

If you’re trying to get on board with Facebook’s new chat bots, steer clear of Poncho the weather cat.

Facebook is beginning to roll out a new feature in its Messenger app – chat bots. The feature allows companies to interact with consumers on a very personal (yet automated) level, something businesses may consider using as part of a marketing or customer service strategy. As Facebook Messenger nears a billion users, chat bots are a way for brands to reach a wide user base without forcing them to download another third-party application.

Here’s a list of the top five potential problems that this new update could bring:

1. Negative user experience within the messenger app.

Facebook chatting is something usually associated with friends, and perhaps bots will make companies seem more approachable. That said, the integration of bots risks degrading the user experience within the application. Facebook is no stranger to user satisfaction issues: it wasn't so long ago that games like FarmVille began dominating news feeds. Facebook was forced to scale back the integration of games on the platform, and chat bots could create a similar situation.

It's a questionable move for Facebook, as its entire business model is based on customer engagement and satisfaction. "The highly personal nature of the messaging environment means that unless brands are actually adding value to the consumer, it may cause users to move to less intrusive messaging platforms," Hannah Giles, head of marketing at Zensend, told Mobile Marketing.

2. Less efficient way of accomplishing simple tasks.

While it might be exciting to think about ordering a pizza through Facebook Messenger, it's early days for this update, and chances are you might run into some trouble. While a major goal of the chat bots is to streamline certain tasks, it's possible they could have the opposite effect. A common issue with similar technology (Siri, Google Now) is that users find themselves continually rephrasing their requests so they can be understood.

One typo or variation in sentence structure can affect whether or not an Artificial Intelligence (AI) proxy will understand a request, and with chat bots, we may waste more time trying to explain ourselves.
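The fragility described above can be illustrated with a toy example. This is a minimal sketch using Python's standard-library difflib; the intent list and function names are hypothetical, not how Facebook's bots actually work:

```python
import difflib

# Hypothetical commands a simple bot might recognize (illustrative only).
KNOWN_INTENTS = [
    "what is the weather today",
    "order a pizza",
    "set an alarm",
]

def exact_match(message):
    """Brittle parsing: any typo or rewording breaks recognition."""
    return message if message in KNOWN_INTENTS else None

def fuzzy_match(message, cutoff=0.6):
    """More forgiving: tolerate typos by comparing similarity ratios."""
    matches = difflib.get_close_matches(message, KNOWN_INTENTS, n=1, cutoff=cutoff)
    return matches[0] if matches else None

typo = "what is teh weather today"
print(exact_match(typo))   # None -- a single typo defeats exact matching
print(fuzzy_match(typo))   # "what is the weather today"
```

Even with fuzzy matching, a bot can only map a message onto intents it already knows, which is why unusual phrasing still forces users to rephrase.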

3. Poor functionality of chat bots for the foreseeable future.

Since most chat bots are still in beta testing, there's a good chance that it will be a while before the technology behind them is rock solid. One bot, Hi Poncho, has already become infamous on the Internet. The animated cat is meant to tell you the weather, but it's been noted to be significantly less effective than simply checking the weather app on your phone.

While it can be expected that developers will gradually improve the interface over time, it’s not certain how long that will take.

4. Further blurring the line between advertising and organic content.

Bots are not completely new to smartphone users, as both Siri and Google Now function similarly to the new Messenger bots. One difference is that chat bots could invade your Messenger inbox without your specific consent. Facebook is considering allowing businesses to message your account at any time, as long as you have previously chatted with them. It is not clear how these messages would appear in Messenger.

This kind of invasion could be a turnoff for users, as many are already sensitive to Facebook and Google’s targeted advertising. While it is not a reality yet, it is definitely a consideration when thinking about the outcome of this update.

5. Lack of consideration regarding ethical implications.

Last month, Microsoft debuted its own chat bot for Twitter named Tay. Released on March 23 and designed to learn from its engagement with other Twitter users, the bot was taken offline just 24 hours after launch, in what was described as an illustration of the hidden dangers of AI. Internet 'trolls' took full advantage of the bot, and soon Tay's account began promoting hateful rhetoric.

Microsoft failed to anticipate that Tay would repeat anything it was asked to, resulting in a variety of racial slurs and offensive remarks.


