Casa is a concept that explores future home automation with machine learning and predictive UI. It’s the fantasy of every IoT early adopter: a home that adapts to context, whether time, place, or schedule. In designing Casa, I explore designing for the moment.
We humans are creatures of habit and go about our work week in a surprisingly consistent fashion. When we run through these routines enough, they become habit: they switch over to System 1, and we don’t even have to think about them. How can smart home technology identify and adapt to these routines to make the smart home experience more seamless and more personalized?
To begin, I started identifying some basic needs statements gathered from market research and my experience at Cenify, a small smart-home startup where I cut my teeth at UI design.
From that, I launched into some feature-specific how-might-we (HMW) challenges and started identifying the slew of user scenarios a potential solution might have to respond to.
Taking insight from the above, I started drafting storyboards. They actually began as user flows, but I felt that narratives better communicated these scenarios playing out in real life, and I enjoyed the exercise. The following storyboards illustrate Casa operating on context from user input and historical data.
Upon arriving home, users would enter a marked area, triggering a notification.
When a user pulls out their phone, an actionable notification would prompt them to unlock their front door.
Upon tap, users would be prompted to use TouchID to authorize and unlock, potentially triggering further events.
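The arrival flow above can be sketched as a minimal geofence check. This is a hedged illustration, not a real platform API: the coordinates, radius, and notification shape are all hypothetical, and a production app would lean on the OS’s region monitoring rather than polling locations itself.

```python
import math

# Hypothetical home geofence: center coordinates and a radius in meters.
HOME_LAT, HOME_LON = 42.3601, -71.0589
GEOFENCE_RADIUS_M = 100

def distance_m(lat1, lon1, lat2, lon2):
    # Haversine distance between two points on Earth, in meters.
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def on_location_update(lat, lon, was_inside):
    """Fire an actionable notification only on the outside -> inside transition."""
    inside = distance_m(lat, lon, HOME_LAT, HOME_LON) <= GEOFENCE_RADIUS_M
    if inside and not was_inside:
        notification = {
            "title": "Welcome home",
            "body": "Unlock the front door?",
            "actions": ["Unlock"],   # tap -> TouchID prompt -> unlock
            "silent": True,          # waits quietly until the user looks
        }
        return inside, notification
    return inside, None
```

Triggering only on the outside-to-inside transition is what keeps the notification from re-firing while the user putters around the house.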
This storyboard is interesting because it makes use of two technologies well suited to the smart home realm: geofencing and actionable push notifications. Actionable notifications can also be delivered silently, waiting for users right when they need them. In the next storyboard, historical data and context help the home initiate subtle changes in the environment that cue the user that it might be time to hit the hay.
Based on historical data, our ideal user usually hits the hay around 10pm. As that time approaches, any lights that are on start to dim and the thermostat lowers the house temperature.
Either through an app interface or voice assistant, the user launches the “Good Night” scene.
After 30 minutes of user inactivity, house security systems engage.
Before I delved any deeper, I wanted a reality check on the state of machine learning and smart home technology, so I decided to call up the one guy who knows more about machine learning than anyone else in my network. In the following clip, I speak with Josh Miller, a machine learning buff with a background in IoT, currently finding data insights at HubSpot.
Josh helped me validate some of my ideas and grounded my understanding of where the current capabilities of machine learning lie, as well as how they are improving. We talked about context awareness, data-gathering woes, and warm morning bathrooms. Knowing the limits of a technology you're building with makes for smarter design decisions in concepts and less friction down the line.
With a clearer sense of the functionality I wanted to include and the key problems to solve, I started sketching, keeping the most-used controls toward the bottom of the screen, usable with one hand, left or right.
Scenes represent the natural abstraction users make when looking to change their home environment. They probably won’t be thinking “living room lights down to 60%, thermostat up three degrees, play Chet Baker on the living room stereo.” Of course not! Perhaps more along the lines of: “It’s been a long day, I need some wine, I need to (launch the scene) Relax.”
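In code, that abstraction is just a human-sized name bundling many device commands. The sketch below is illustrative only; the device names and command tuples are made up, not a real smart-home API.

```python
# A scene maps one human-sized name to many underlying device commands.
# Device names and command shapes here are hypothetical.
SCENES = {
    "Relax": [
        ("living_room_lights", "set_brightness", 60),
        ("thermostat", "adjust_temperature", +3),
        ("living_room_stereo", "play", "Chet Baker"),
    ],
}

def launch_scene(name, send_command):
    """Launch a scene by dispatching each of its device commands in order."""
    for device, command, value in SCENES[name]:
        send_command(device, command, value)

# Usage: one tap (or one phrase) fans out into every underlying command.
log = []
launch_scene("Relax", lambda device, command, value: log.append((device, command, value)))
```

The user says “Relax” once; the system does the thinking about brightness percentages and degrees.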
While scenes are typically preset by users, they don't have to be. Given enough historical data and user research, the scene above could be generated with machine learning from basic rules (lower lights, higher heat) and a home environment to adapt them to.
Looking at some of the aforementioned control modules (Scenes, Devices, Rooms, Security etc.), these high fidelity screens demonstrate how arrangement can affect affordance when the context requires certain functionality to be front and center.
This project was a really interesting exercise in designing for the moment: considering not only when the user will use the product, but where they might be and how they might be feeling. Building a mobile experience let me play to the strengths of the smartphone. Characteristics like geolocation, fingerprint- or face-based authentication, and always being within reach all went into making an orchestrated, personal experience. In smart home interactions, the mobile space is seen as a place to complete smaller interactions and micro-tasks. Abstracting bigger, more complicated functions into simple tap interactions presented at the right time was a necessary challenge, and a fun one to solve.