We have been doing some very exciting stuff. Of course, we love prototyping: it is an important part of innovation, and we just cannot wait to try new things! Lucky for us, we had an intern who felt exactly the same way. His name is Willem Duijvelshoff, and he researched the use cases and ethics of chatbots on our very own social intranet platform, Plek.
The goal of this research was to come up with different ways chatbots could be useful on Plek, whilst addressing the new privacy issues they might raise. In his first weeks at ILUMY, Willem organized a workshop to generate use cases. With the guidance and assistance of the Plek team, Willem then designed a chatbot and prototyped it on the Plek platform. He measured its effects on employees' engagement with their work and evaluated the privacy attitudes of the participants.
Ideas for chatbots on Plek
Willem and the Plek team came up with the following ideas for chatbots on Plek:
Onboarding Bot
A bot can help new employees set up their working environment. It can introduce itself, send them introductory documents, and even check whether an employee has actually read the presented information. This introduces the dilemma of how far an employer should be able to go in monitoring their employees.
Employee Engagement Bot
A bot can ask employees how they are feeling at intervals throughout the day, in order to obtain an overview of the sentiment, morale and energy levels within the organization. It is essential to interpret this information carefully. Might employees worry that submitting a negative response damages their reputation? And is not participating in the survey a negative response in itself?
Augmented Group Chat Bot
A bot can assist a group chat by indicating the topic of the conversation or by proposing documents or links to add based on the context. An atmosphere of surveillance has to be avoided, so that employees are not discouraged from speaking freely.
Security Reminder Bot
A bot can make recommendations or give reminders about the company's security policy, such as links or shortcuts for updating settings and taking safety measures. However, it is unclear to what extent users would accept this form of personalization.
Laws of bot ethics
The overarching insight from this research is that the perception of privacy while chatting with a bot is a prerequisite for the feasibility of a use case. Willem and the Plek team distinguished several core values that have to be kept in mind when designing bots, and they translated these values into concrete chatbot features.
When starting an interaction, a chatbot should indicate that it is a bot, or this should be clearly stated elsewhere. A bot should explain how the user's data is collected, who owns the data, which parties can see it and how it is interpreted. Users should be able to request the data the bot holds about them via a "What do you know about me?" feature. The bot should also let users modify or delete their data.
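The "What do you know about me?" feature and the right to delete one's data could be sketched as two simple bot commands. The sketch below is a minimal illustration, not part of any real Plek API; the names `PrivacyStore` and `handle_message` are hypothetical.

```python
class PrivacyStore:
    """Hypothetical in-memory stand-in for wherever the bot keeps user data."""

    def __init__(self):
        self._data = {}  # user_id -> dict of collected fields

    def record(self, user_id, field, value):
        self._data.setdefault(user_id, {})[field] = value

    def export(self, user_id):
        # "What do you know about me?" -> a copy of everything stored
        return dict(self._data.get(user_id, {}))

    def delete(self, user_id):
        # Right to erasure: drop all stored data for this user
        self._data.pop(user_id, None)


def handle_message(store, user_id, text):
    command = text.strip().lower()
    if command == "what do you know about me?":
        data = store.export(user_id)
        return f"I currently store: {data}" if data else "I have no data about you."
    if command == "delete my data":
        store.delete(user_id)
        return "All data I held about you has been deleted."
    return "Sorry, I didn't understand that."
```

The point of the sketch is that transparency is just another conversational intent: the bot answers questions about its own data collection the same way it answers anything else.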
When opening the interaction, it should also be immediately clear what the bot can do for the user. This helps set expectations and guides users' decisions: users can then decide for themselves whether the data the chatbot asks for – or implicitly collects – is worth the interaction.
Users must regard a bot as useful before they are willing to share personal information about sensitive themes such as engagement at work. Trust in a bot is built over time, so a bot that aims to request sensitive personal information should build up the conversation gradually rather than asking right away.
An opt-out option should be available. Users should be free to decide whether they want to take part in bot interactions, in the workplace or in any other environment. They should not be obliged by management or any other authority to take part in a chatbot conversation.
If users decide to use the bot, they should have the option of personalizing it. Based on the user studies, personalization increases the chances of engagement with the bot. However, if a user prefers a generic bot that does not learn from their behavior, this should also be possible.
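The last two principles, opt-out and optional personalization, can be captured as a small per-user preference object that the bot consults before acting. This is a hypothetical sketch; `BotPreferences`, `should_message` and `may_learn_from` are illustrative names, not an existing API.

```python
from dataclasses import dataclass


@dataclass
class BotPreferences:
    opted_in: bool = False      # users must explicitly opt in; the default is out
    personalized: bool = False  # generic bot unless the user enables learning


def should_message(prefs: BotPreferences) -> bool:
    # Never initiate a conversation with a user who has not opted in
    return prefs.opted_in


def may_learn_from(prefs: BotPreferences) -> bool:
    # Only adapt to behavior if the user explicitly chose a personalized bot
    return prefs.opted_in and prefs.personalized
```

Defaulting both flags to `False` encodes the ethics guidelines as code: participation and learning are things the user switches on, never things they have to switch off.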