There’s something special about New York City. And in the case of The Container Store, there is something special about shopping in New York City.
The Manhattan location had been testing out an alternative to pushing around a shopping cart: customers could instead check out a handheld scanner from a kiosk at the front of the store. By scanning barcodes off of the shelf tags, customers could create a virtual shopping cart. After shopping, they would bring the scanner back to the kiosk, pay for their “cart”, and have their items delivered, usually for free.
The hardware for this shopping experience had been a traditional scanner similar to what might be used to walk around a store and create a gift registry. There’s a trigger to engage the scanner and a small screen to show you what you’ve scanned.
Ever looking forward, The Container Store wanted to improve this experience by taking advantage of the familiarity of touchscreen smartphones and discontinuing use of the traditional scanners. This would give customers a much richer — and more familiar — shopping experience.
I began by studying the existing scanner screens and meeting with stakeholders to define exactly what the application was expected to do. Having gathered this information, I created rough screen designs and linked them up in InVision. We came up with two different navigation models for the app: one where the scan button and cart were on the same screen, and one where the user had to navigate to the cart. I was interested in how both customers and the client felt about seeing the cart at all times — which includes prices and a subtotal — versus having to navigate somewhere else to see this info.
With predetermined workflows hooked up in InVision, we tested both versions of the app with 10 participants. Large printouts of both flows were put up on the observation room walls so that the client could post notes and observations as testing progressed. We walked our participants through a mock shopping trip by having them fake-scan select items we had set up in the testing room. A series of follow-up questions and ratings for key steps helped us gather feedback. After each session we regrouped and discussed the challenges the participant faced and the changes we thought we should make.
After the first day of testing there was enough information to narrow down to one user flow. This allowed us to play with various details, gather feedback, and refine at a lower level.
The sessions were great. We were able to bat around ideas and make more focused decisions as a team by watching people use the mockups. There were plenty of interesting discussions that shaped the end result.
The final design ended up drawing primarily from the version of the app that treated the cart as the app’s “home” screen. Though no one had real issues with having to navigate away to view the cart, this version felt like the most straightforward option: start with an empty cart, scan an item, see the item added to the cart. Repeat. It also matched up best with some of the recent hardware decisions the client had made around the scanner cradle that would be used.
My last recommendation after handing off design assets to the client was that a second testing session should be scheduled once the app had been built out and integrated with the scanner cradle.