Now that I have completed my research, including competitor analysis, user personas, empathy maps and a customer journey map, I feel ready to start mapping out my user flow. I need to consider how my user will journey through the app in practice, and in what order they should work their way from the start of the flow to the finish. As this app is designed to be used with refugees, I also have to consider the flow from their perspective: if they do not engage with the product and feel comfortable using it, it will not serve the purpose it is designed for. This will be my key consideration, but I also need to consider ease of use, the basic logic of how each step flows into the next, and what may be happening around my users at the time. Fitness for purpose, and usability in a difficult situation, are key points within my design.
As this is a complicated product involving a large number of ideas, I decided to start creating my user flow on paper. As we discussed in Week 4's lecture, paper offers a cheap and easy way to explore lots of different options before deciding which ideas I will take forward to the next stage.
I started by creating some overview user flows covering my whole idea, from start to finish. I wanted to give myself a sense of where everything might go and to look at my idea as a whole before going into more detail.


While I was able to get all the major parts of my idea into this flow, it wasn't pretty, and I realised that to cover the two main functions of my app I would need to create two separate user flows: one to record a person's details and one to complete a basic health check. These would then be accessed from a central home page.
The first step would be to work out which language the app needs to translate. This could be done by selecting a language manually, if you knew it was Arabic for example, or by the app listening to the person speaking and using AI to identify the language. I intend to use AI as my translator, providing speech-to-speech translation that makes sense within the overall app.
After that it becomes a challenge. In most cases it would seem most logical to take a person's details next; however, as refugees might be frightened and unwilling to give their details, it might be better to do the health check first and build trust. While I explored setting up the user flow both ways, with taking details first and with the health check first, I feel this decision should be made on a person-by-person basis, allowing the user on the ground to choose rather than forcing one order on them within the design. It would also allow users to carry out one of these tasks without the other, if a person did not want to either give details or have a health check.
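To check my own logic here, I sketched the home-page decision as a small piece of code. This is only an illustration with hypothetical names (`home_menu`, the task labels), not part of the app itself: the point is that the design offers both tasks from a central menu, the person on the ground picks the order, and either task can be completed without the other.

```python
def home_menu(order: list[str]) -> list[str]:
    """Run the tasks chosen from the home page, in the order the user
    on the ground decides; running only one of them is allowed."""
    available = {"record_details", "health_check"}  # hypothetical task names
    completed = []
    for task in order:
        if task in available:  # ignore anything the app doesn't offer
            completed.append(task)
    return completed

# Health check first to build trust, details afterwards:
home_menu(["health_check", "record_details"])
# Or just one task on its own:
home_menu(["health_check"])
```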
With these learnings on board, I now need to explore the separate user flows and work them out in isolation before deciding the best way to provide access to both.
The first user flow I wanted to work on was for the language recognition feature, as I believe this will be the first feature my users will want to use. The main idea of this feature is that when a person speaks to the device while the feature is in use, AI within the app will recognise the language and tell the user what it is. In the background, once a user confirms on screen that the language is correct, the app will set the rest of the user flows to display in that language.
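The flow above can be sketched as a few lines of code. This is a minimal sketch under stated assumptions: `detect_language` is a hypothetical stand-in for whatever AI service would actually identify the language, and the on-screen confirmation is reduced to a boolean.

```python
def detect_language(audio_sample: str) -> str:
    """Hypothetical AI language identification; here a stub keyed on
    a few sample greetings, standing in for a real speech model."""
    known = {"marhaba": "Arabic", "bonjour": "French", "hola": "Spanish"}
    return known.get(audio_sample.lower(), "Unknown")

def language_recognition_flow(audio_sample: str, user_confirms: bool):
    """Detect the language, show it to the user, and only set the app's
    display language once the user confirms on screen."""
    detected = detect_language(audio_sample)
    if detected != "Unknown" and user_confirms:
        return detected  # the rest of the flows now display in this language
    return None          # fall back to re-recording or manual selection
```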

Another user flow I want to map is the translation feature. While translation will also be needed in other parts of the app, those instances will be implemented as part of their own user flows. In this flow, I am looking at the feature that allows a two-way conversation between two people speaking different languages. The feature is designed as a speech-to-speech translator; however, it will also allow users to type in text to be translated, and it will display the spoken words on screen as it picks them up. Other translation apps do this, and I believe it will be useful for my users to see that what they are saying has been picked up accurately before it is translated. I hope this will avoid mistranslations: users will be able to tap on the screen if something has been picked up wrong and correct it by typing or re-recording.
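The confirm-before-translate step can be sketched like this. Again this is an illustrative sketch only: `transcribe` and `translate` are hypothetical stand-ins for the real speech and translation services, and the user's on-screen correction is modelled as an optional string.

```python
def transcribe(audio: str) -> str:
    """Stand-in for speech recognition: pretend the audio arrives as text."""
    return audio

def translate(text: str, target: str) -> str:
    """Stand-in for the AI translator; just tags the text with the target."""
    return f"[{target}] {text}"

def speak_and_confirm(audio: str, correction, target: str) -> str:
    """Show the transcript on screen first; if the user taps and fixes it,
    use the corrected text; only then send it for translation."""
    transcript = transcribe(audio)
    if correction is not None:  # the user spotted a mistake on screen
        transcript = correction
    return translate(transcript, target)
```

The design choice this captures is that translation only happens after the speaker has had a chance to see, and if necessary fix, what was picked up.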

