I was the IxD and UI designer. Together with a technical project manager and a UX analyst, we were the team hired to design the app. On the client side, we interacted with an industrial designer (the project lead), the marketing and customer service teams, and the technology agencies working for the company.
4 design sprints
3 rounds of usability testing
The company recognized an opportunity in building the country's first Wi-Fi-controlled washing machine. They own multiple brands and planned to position this product as a top-of-the-line model.
Research & analysis, what we were building and for whom
The company provided the report of an ethnographic study they had run. In addition, we ran informal interviews with women and men aged 25 to 45 who were both tech-savvy and environmentally conscious. We talked to families, couples, and single people who washed in their buildings' laundry rooms.
We confirmed that this is one of the most valued appliances in the home, if not the most valued. The washing machine can be your best ally, but it can also send you straight back to the middle ages if it breaks down.
This product was going to be a Wi-Fi-based solution that lets tech-savvy, environmentally conscious people program a wash for their clothes remotely.
But… why would someone need an app to wash?
Understanding people's needs
I started by breaking down the washing process, focusing on understanding people's needs in context and then breaking them into smaller needs.
What are the actual steps to washing the laundry?
1. Before washing: sorting and loading.
The fine art of sorting is key to the success of the wash, and it turned out to be a whole world in itself. Interviews showed us that a big percentage of people (mostly men) had only very basic knowledge when it came to sorting their laundry; at most, they knew the rinse cycle was probably a bad idea for colored clothes. Those who did sort did so mostly by two criteria: color (whites, darks, lights, jeans, and delicates) and fabric. The more experienced also had their own routines for dealing with stains while sorting.
Sorting and load weight are key factors in a good wash, as is using the right amount of detergent and/or softener. By sorting correctly you can choose an efficient cycle and an accurate water temperature, but most people hadn't read their appliance's manual.
How do we prevent errors here so that programming the wash is successful? How can we improve the experience for the inexperienced without making them feel stupid?
For the app's remote purpose, sorting and loading could be:
Someone else could do the sorting and/or loading for the user to wash remotely.
Done by the user at home. (A big value question at this point: is this app really going to be so helpful that I would do everything BUT start the wash directly from the appliance? This is a very common issue in IoT apps. The truth is, if you can do something in fewer steps by interacting directly with the device and there's nothing else on offer… chances are you won't be using the app at all.)
To explore these situations further, I made some storyboards with examples to share with the team.
2. Actual wash
The company's research showed us that when it came down to the buying decision, people wanted the washing machine to have as many cycles as possible, but on a daily basis they used only up to three programs and forgot about the rest.
Why could this be? Looking through some of the manuals, certain cycle names were too similar and could lead to misunderstanding.
For example: what was the difference between the "hand" and "wool" cycles? It turns out that "hand" can spin at up to 800 rpm but "wool" only up to 600 rpm. Which one is quicker, "half" or "quick"? It turns out that "quick" was 45 minutes long and "half" (as the name points out) only 30, so… half was quicker than quick. These choices make people stop to think about the meaning when they really just want to wash. There was an opportunity for improvement here.
Together with the client's project team, we decided to include an open card sort before the usability testing to learn how people understood the cycle names, how they grouped them, and even how they would name them. This exercise would let us verify what we were seeing and would give the client a baseline for identifying cycle names that needed review.
When it came to the cycle wheel on the washing machine's display, the complete list of possible programs was presented with the same hierarchy (e.g., "Cotton" had the same hierarchy as "Drain").
In the first design proposal, we tested the actual distribution of cycles to see how it worked for users.
Once the laundry is ready… who takes it out of the washing machine on time? Does it matter? It turns out finished loads should be removed immediately: if you let damp clothes sit in the machine, you will soon smell it.
What if I could tell the machine to finish just as I get home, so I can be there to hang the clothes? What if I'm coming home late from work and the wash has already finished?
I divided removal into three modes: direct (remove right after the wash), scheduled (programming when removal should happen), and rescheduled (leaving the laundry "floating" until it can be removed). And suddenly the app was showing more value.
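The scheduled mode boils down to working backwards from the desired finish time. A minimal sketch of that idea (the function name and times are hypothetical, not taken from the actual app):

```python
from datetime import datetime, timedelta

def delayed_start(ready_at: datetime, cycle_minutes: int) -> datetime:
    """Work backwards from when the wash should finish to when it should start."""
    return ready_at - timedelta(minutes=cycle_minutes)

# If I'm home at 18:30 and the "quick" cycle takes 45 minutes,
# the machine should start at 17:45.
print(delayed_start(datetime(2015, 6, 1, 18, 30), 45))
```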
Journeys to cover, wireframes to design
Our scope was the remote programming of a wash. For that, we would cover:
I sketched and worked on use cases on paper and immediately placed them in a common area for the team to see. This led to many interesting conversations and created new doubts, questions, and proposals to share with the client team.
More sketching followed, just to make sure we had agreement on the proposal before increasing fidelity. For me, this usually means I can move on to digital wireframes.
We would focus on:
Task 1: Pairing device and initial set up
Task 2: Programming a remote wash
Task 3: End of the washing cycle alert
Task 4: Handling an error that required technical service
In parallel with wireframing, the testing script and details were also designed.
This app had a fundamental ally: the digital display of the washing machine. Its relevance varied across the different journeys, but it was always a key element providing feedback. Because of that, we were also going to design the messages displayed there for the test. At the time, there was no actual washing machine available for testing, so the client offered an alternative: a wooden prototype where we could fake the display. For me, having the physical product was imperative for people to engage with the test, and it could yield insights about how they expected to interact with the display even though this wasn't part of the test. It was an opportunity to observe, and the client not only understood this but made it possible.
Ideation, what if…
I kept thinking of this appliance as an ally. So, what if it was one of your contacts? What if it was a person you could talk to?
"I'm ready to wash." It lets me know if the weather is bad, if there's a power outage at home, if we are running low on detergent, or how we are doing on savings: "With this wash we saved xxx liters of water."
In the end, humanizing the machine seemed unfair to users.
Prototyping, making ideas tangible
For device pairing and initial setup, we talked to many stakeholders and relied on the company's study to learn who performed the technical installation of the machine and who would eventually assist in the process. Also: what would the app's role be during this stage?
For the end-of-cycle alert (task 3), we worked with default app notification styles, and for error handling (task 4) we looked through the manuals for the different types of errors that could be prevented or partially handled at home. We worked and reworked messages and graphics for both tasks until they felt ready to test.
For programming a remote wash (task 2), we prototyped the actual cycle hierarchy in the same order as it appeared on the washing machine's display, keeping the actual cycle labels and adding only the duration and a two-line description of each cycle. This way we would be able to get baseline metrics to improve on.
I used Illustrator to map all the flows and build the testing prototypes (final deliverables were required to be .psd files), and InVision to share them with the client. (At that point, in 2014, InVision didn't have a mobile app, so the mobile format worked for sharing and obtaining feedback but was not suitable for testing. Also, design/development references had to be added as comments, so when actual feedback was given it cluttered the screen. They have since covered both issues.) The team developing the app was starting to get involved, so I worked on flow maps to provide context and support for the InVision prototype during reviews.
It turns out that doing this again at this stage also worked as a checkpoint for myself. (Generating this type of deliverable at this point, before testing, requires extra effort but pays off.)
No matter what tool is used, there's always a call to present design updates. It doesn't have to be long, and it doesn't require the client to give you feedback right away. Usually, you walk them through the flows, back up decisions with rationale, and get their first impressions. Creating small agendas for these calls and sending them in advance is something I also find very efficient for handling team/client anxiety. It also helps you organize your speech while presenting and prevents you from forgetting something relevant.
From low fidelity to visual design
Just before the first test, the washing machine finally arrived. It completely engaged users and improved their perception and expectations (no one realized it was a wooden prototype). We could see whether they were staring at the washing machine's display for feedback, or lingering there exploring options because they weren't quite sure they had gotten what they needed in the app. We tested on paper: I played the computer role in the test, providing the app screens and showing the feedback on the display.
Findings and solutions:
We ran three rounds of tests with six participants each.
When it came to device pairing and initial setup (task 1), the process required steps to be performed both on the machine's display and in the app. The first prototypes included graphics of the display showing what was expected from users. What happened was that some users tried to interact with the example graphics themselves. To solve this, I added a background image emulating the behavior the participant needed to perform on the machine's display, and removed the background when interaction in the app was required. Along with reworking the messages and iterating on the process steps, these changes took us from 50% to 75% effectiveness and from 67% to 73% efficiency.
When people started to program the wash, they rarely changed the temperature or spinning speed. When presented with the full cycle list in the same hierarchy as the display, they chose the one cycle they knew had worked fine for them in the past with their current machine.
To encourage discovery and more efficient washing, I reworked the programming flow, splitting cycles into three groups: short cycles (30' and 45' washes), specific cycles (softener, extra spin, drain), and full cycles (cotton, synthetic, delicates, baby, and sports).
To show the existing relationship between a cycle and its best temperature and spinning speed, we needed some extra information about the laundry. Users did have this information from the previous sorting and loading phase, but we were blind to it. Nor could we know, for this model at least, the laundry weight or whether the amount of detergent was correct. So I included some extra fields asking people for color and dirt information. If it made sense to them, and we did a good job using that information to suggest better choices, it would work.
It is not a matter of how many steps or options you ask users to perform. It's more about what makes sense at that moment and how it improves their overall experience.
So from the second test on, the interface asked people for input on color and dirt. These fields weren't on the machine's display, nor were they mandatory steps to start washing, but we could now suggest cycles, temperature, and rpm according to the laundry. For example, if someone chose "color", we would deactivate the rinse cycle below. If someone chose "delicates", we would only show the available water temperatures and spinning speeds. When possible, we would even suggest a range for them.
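That filtering behavior can be sketched as a few simple rules. Everything here — the option lists, values, and the `available_options` helper — is a hypothetical reconstruction of the idea, not the shipped app's logic:

```python
def available_options(selection: str) -> dict:
    """Return the options to show after a laundry choice (illustrative rules)."""
    cycles = ["cotton", "synthetic", "delicates", "rinse"]
    temps_c = [20, 30, 40, 60]   # water temperatures, in Celsius (assumed values)
    rpm = [400, 600, 800, 1000]  # spin speeds (assumed values)

    if selection == "color":
        cycles.remove("rinse")                     # rinse is risky for colored clothes
    elif selection == "delicates":
        temps_c = [t for t in temps_c if t <= 30]  # only gentle water temperatures
        rpm = [r for r in rpm if r <= 600]         # only low spin speeds
    return {"cycles": cycles, "temperatures_c": temps_c, "rpm": rpm}
```

The point of the sketch is the design choice: the extra input never blocks the wash, it only narrows and reorders what is offered next.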
Temperature and spinning speed were also rarely adjusted. Now, knowing more about the laundry, I could add value and educate users, making them feel they had control back.
I replaced the "temperature" label with "water" and associated temperature numbers with cold, warm, and hot ranges. A similar solution was applied to spin speed values, showing dry, wet, and float ranges.
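A rough sketch of that relabeling, with made-up threshold values (the product's real ranges weren't specified in this write-up):

```python
def water_label(temp_c: int) -> str:
    """Map a numeric water temperature to the friendlier cold/warm/hot ranges."""
    if temp_c <= 30:
        return "cold"
    if temp_c <= 40:
        return "warm"
    return "hot"

def spin_label(rpm: int) -> str:
    """Map spin speed to dry/wet/float ranges (higher rpm leaves clothes drier)."""
    if rpm >= 800:
        return "dry"
    if rpm >= 400:
        return "wet"
    return "float"
```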
Effectiveness of this task increased from 67% to 75%.
Different forms of feedback at different times.
The delivery consisted of 32 .psd files for the tested tasks plus 14 .psd files for extra tasks. Final visuals were for Android, xlarge, 1080×1920, with scalable elements, along with a prototype with comments and interactions on InVision.
More exploration, extra tasks
After the tests, we explored some other flows. I included final visual proposals in the delivery for: verifying the Wi-Fi signal, wash history, a washing assistant, and CRUD screens for device handling (desktop website).
It seems that the first approach to IoT is to use mobile phones as remote controls. The challenge lies in making sure we are answering a real need and earning trust while the devices themselves get smarter.
What would I have done differently? After the third round, further testing with an interactive prototype and some validations talking to the actual washing machine's display would have been mandatory for me. Also, a follow-up with the agency developing the app would have been a great addition.
The washing machine was released to market in June 2015: http://next.drean.com/