User feedback showed that the experience of managing videos in the library could be better. Finding a specific video is difficult (e.g. if you're looking for a golf swing from two weeks ago, you might have to scroll through many videos to find that specific one), and there's no notion of separating videos into groups or teams.
Although some user feedback data had been provided previously, watching real users interact with the app, as well as interviewing them, made it possible to identify new insights and reduce the room for interpretation. The tests were done remotely and my control over them was limited, but the goal was simply to see real users and the app in action. Three generic tasks were provided:
Another set of users was asked to interact with the app. Once again, they were asked to perform three main tasks. The third task was adapted, though, to something more specific in order to test my assumptions about a very specific interaction (which features would users resort to when looking for specific content?).
I also did a Heuristic Evaluation based on Jakob Nielsen's 10 Usability Heuristics, so that I could identify a list of potential usability issues on my own and compare it with the user research results.
Again, three tasks were assigned. Although I didn't have time for a proper setup (to record the users' faces or voices), I did record the screen. The following is a glimpse of one of the user tests.