In the first stage my role, along with the PM (André Albuquerque) and the CPO (Daniel Eiba), was to define the direction, viability and overall plan for a system that would automatically manage most of the student-landlord interaction within the Uniplaces platform. After interviews with users and support teams, a communication tool that answered the three identified stages of user interaction with Uniplaces started to form as the logical solution. Correlating this with data coming from analytics confirmed it was the best approach. The data also allowed us to make some safe assumptions when prioritising features. In the second stage my role was to work with and coordinate the development team alongside the PM. I facilitated the team's effort in delivering the right experience to the user, as well as designing the interface, from interaction to animations.
Create the best and most effective solution for students to find the place they want.
On the business side, the goal was to create a solution that handles the most common user interaction patterns, allowing support teams to focus only on solving very specific, more complex problems. On the user side, it was to make it easier and quicker to find the right place through an effective and fun-to-use tool.
Research was a key part of the project. Not only was it a big focus in the early stages, it remained a constant parallel process running throughout development. Interviews with users and the support team, along with data compiled from analytics, were constantly discussed and digested in workshops designed to make sense of the data, using it to steer direction and inform design and development decisions as much as possible.
Since we were already using a Google Ventures Design Sprint inspired approach in some stages of the project, it made sense for part of the team to attend a mobile-focused Google Masterclass in London, which helped us further understand how to map and identify key opportunities in the user journey of the platform. It was not only a good opportunity to find out about Google Home (hence the cartoon I did there), but also a chance to use one of Google's methodologies to quickly bring innovative technologies into the project.
Multiple sessions and workshops, facilitated by me and the PM (André Albuquerque), were held afterwards, bringing in the development team, the design team and occasionally some key stakeholders, with the objective of defining the user flows and interaction maps.
Multiple discussions happened in a very organic way throughout the project to define what "Uni" should be called and how it should look. After a direction and personality were finally agreed upon, a document was created with guidelines for behaviour, animation and look (e.g. it had to incorporate the Uniplaces logo somehow). This document was also heavily based on research into other existing bots, which allowed us to make sure the bot would not only act "naturally" but also would not repeat the mistakes others had made. Based on this I researched the look of hundreds of other bots, started sketching, and asked the team to vote on the ones they liked the most and believed would fit the guidelines (the blue dots represent the team members' votes).
Featured below are some quick early iterations from this project. The first stages are usually focused on translating the key features defined in the project specs and making sure everyone is aligned around a clear vision (or potential visions) before diving into more detail and high fidelity, which usually involves a much bigger time and emotional investment and becomes more complicated and costly to change if needed. After the first very rough, quick sketches/doodles I still like to explore some almost "mid-fidelity" hand-drawn wireframes. I feel I can move faster at this level than if I jumped straight into digital, and I am less likely to get lost in details, as so often happens when working in, say, Sketch or Photoshop.
There are, for me, two other big advantages in keeping this hand-drawn and physical. One is that, with the right context, it is possible to make the rest of the team feel comfortable enough to also sketch and contribute their ideas. The other is that this material can stay on the walls, especially if a "war room" has been set up for the project, making it easier for anyone to use it as reference, and for newcomers to the project to understand where some later design decisions came from.
It seems the physicality of paper, post-its and the like reduces the fear of failing: they are low cost, easy and quick to redo, and expectations around "visual quality" are lower than with something digital. They tend to be more fun as well, of course. And when you are having fun, the fear of other people's opinions dissipates a bit, making way for a more creative environment.
A lot of exploration was done continuously on the chatbot's interface before it even went live for the first time. This showcases just a part of it, essentially around the header, the canned questions (close to the composer) and the background. Each of these explorations was based on different assumptions that kept being discussed and tested until we arrived at a couple of versions that seemed to make sense to us, to the platform design system and, obviously, to the users.
Animation was a key part of the interaction for this experience. Especially since many of the features are not yet established UX patterns, a certain level of education regarding affordance, focus and system status was required, and animation was the best, most natural and most transparent way to teach the user.
An animation I put together to showcase and explain the canned-question animations to the team. These short animations were usually just the first step in the process of getting to the "final" version. A lot of back and forth, conversations and discussions happened afterwards around the prototype itself. Most of the final-stage fine-tuning was actually done in "real time": I would sit for days, hours on end, at the same desks as the development team so we could give each other feedback on the assets, the animation, the sizes and the behaviours, in a truly iterative Lean approach.
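To give a flavour of the kind of animation logic that gets iterated on in those side-by-side sessions, a staggered entrance for the canned questions can be reduced to a small pure function. This is a minimal sketch under stated assumptions: the function name and the timing values are illustrative, not the production numbers used on the platform.

```typescript
// Compute per-item animation delays (in ms) so canned questions
// slide in one after another instead of appearing all at once.
// `baseDelay` and `step` are illustrative assumptions, not real values.
function staggerDelays(count: number, baseDelay = 0, step = 60): number[] {
  return Array.from({ length: count }, (_, i) => baseDelay + i * step);
}

// Example: three canned questions entering 60 ms apart.
const delays = staggerDelays(3); // → [0, 60, 120]
```

Keeping the timing in one pure function like this makes it trivial to tweak a single number during a live tuning session and see the whole sequence update.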
A UI Kit was created in Sketch that allowed screens to be built quickly and made it easier to export assets. But this was far from a closed document. It was a living document that kept being changed and updated. Sometimes, before jumping into implementation, mockups would still be generated for the team and stakeholders to discuss.
Right from the start, one of the main tenets of the platform, fully responsive behaviour, was set as a basic, unquestionable characteristic of the feature. As such, it can be used on mobile devices, tablets and, obviously, desktop.
A look into the several analytics tools showed the team that users were almost evenly distributed between desktop and mobile. As such, neither platform was considered more important (well, mobile a bit more, since it was a safe assumption, based on the data, that access from mobile devices would eventually overtake desktop).
Sometimes challenges arise not from a lack of screen real estate but from too much of it. On some desktop screens with huge resolutions, tweaks seemed necessary to create a visually balanced experience (e.g. in some early user tests with big screens, users would simply not see the chatbot CTA). A lot of fine-tuning and QA across multiple devices had to be done to make sure the experience was consistent across devices and resolutions. But, importantly, this was always focused on the resolutions most common among our users according to the analytics.
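As a hedged sketch of how that resolution-aware tuning might be expressed in code, the helper below maps a viewport width to a layout bucket, so a "wide" bucket can trigger adjustments like enlarging or repositioning the chatbot CTA. The function name and pixel thresholds are assumptions for illustration, not the platform's actual breakpoints.

```typescript
type Layout = "mobile" | "tablet" | "desktop" | "wide";

// Map a viewport width (px) to a layout bucket. On "wide" screens the
// chatbot CTA could be scaled up or repositioned so it stays noticeable.
// Thresholds are illustrative assumptions, not Uniplaces' real breakpoints.
function layoutFor(width: number): Layout {
  if (width < 768) return "mobile";
  if (width < 1200) return "tablet";
  if (width < 1920) return "desktop";
  return "wide";
}
```

Driving the tweaks from analytics then becomes a matter of checking which buckets most real sessions fall into and prioritising QA there.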
The Uniplaces chatbot went live quite quickly, going from just an idea, a concept, to something tangible and real, especially considering how small the team was. Although part of a bigger strategy, of course, the actual chatbot "MVP" took less than three months to go live for the first time. That meant a constant back and forth: setting up the frontend and the UI and making it fit within the platform with responsive behaviour, configuring the incredibly complex logic behind it, diving into machine learning, integrating everything with the backend and, at the end of the day, making sure the experience fit the user needs and feedback as well as the business needs, constraints, expectations and resulting metrics. It was a lot of hard work, but always a lot of fun, especially because the Product Team at @Uniplaces always keeps an incredibly staunch, tireless, strong-willed profile. It was indeed a pleasure working with them on this project :)