In-Car Interface
Name: Makson Serpa 
Portfolio link:

Since I had no prior knowledge of in-car interfaces or self-driving cars, I spent part of my study hours learning how they work, from Tesla's support guide to videos where I could watch people testing these cars and this kind of interaction. The idea was to gather information about how these interfaces are structured and their best practices, and to understand how people behave when exposed to this type of experience. All the materials I used for this research can be found here.

During my study I took some screenshots to use as guidelines and as a way to identify patterns, since this is a new kind of product with few people using it. After gathering my references, I wrote down which piece of information had to go in each place. On landscape screens, the middle area is a safe spot that either the driver or the passenger can touch; the left side is reserved for the driver's interactions and the right side for the passenger's.
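The zone rule above can be sketched as a simple mapping from an action's audience to a screen zone. This is an illustrative sketch only; the names (`Audience`, `ScreenZone`, `zoneFor`) are my assumptions, not part of the actual design files.

```typescript
// Landscape-screen zone rule: left = driver, middle = shared, right = passenger.
type Audience = "driver" | "passenger" | "both";
type ScreenZone = "left" | "middle" | "right";

function zoneFor(audience: Audience): ScreenZone {
  switch (audience) {
    case "driver":
      return "left"; // reserved for the driver's interactions
    case "passenger":
      return "right"; // reserved for the passenger
    case "both":
      return "middle"; // safe spot reachable from both seats
  }
}

console.log(zoneFor("both")); // → "middle"
```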

With that, I started testing different perspectives on the interface design in a Crazy 8's brainstorm. From these ideas I selected the one I liked the most and started designing the flow at very low fidelity to see if it could work.
During this time I was writing possible scenarios for the main flow. With more time I would have spent longer on this step, since I believe it prevents many flow issues and structural mistakes; for this experiment I just walked through the possible flows a user would follow to find an artist.

At this point I was comfortable with the structure. The final version is a single interface focused on context and actions: secondary options are displayed only when they are contextual to the user's actions. Options more relevant to the driver are shown in the left corner, and actions relevant to both driver and passenger are displayed close to the middle, where both users can interact with few touches and little effort.

  • The core structure is fully focused on giving the driver a quick view of the car's status and controls, as well as navigation, since this is a fully self-driving car and everything happens once the car knows the navigation route.
  • Secondary structure (Max view): where users surface and interact with applications, such as search, interactions that require focus, or even web content.
  • Secondary structure (Mini view): shows the important actions of applications that need to be accessible at a glance.
  • Secondary structure (Max view + Relevant content): splits the Max view into two parts (20% / 80%) and is useful for showing actions or information relevant to the context. (e.g., it shows all songs of the album you're playing, so you can pick the one you want to play, while navigating the other part to find new albums, etc.)
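The 20% / 80% split could be computed as below. Only the ratio comes from the design above; the function name and pixel values are hypothetical.

```typescript
// Split the Max view: 20% for the contextual panel (e.g. the current album's
// track list), 80% for the main browsing area (finding new albums, etc.).
function splitMaxView(totalWidthPx: number): { context: number; content: number } {
  const context = Math.round(totalWidthPx * 0.2);
  return { context, content: totalWidthPx - context };
}

console.log(splitMaxView(1920)); // → { context: 384, content: 1536 }
```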

I chose Entertainment as my theme, more specifically Music, so my objective was to design the main flow for any user to listen to music in a self-driving car and to find their favourite artist through this interface. The car I have in mind as a model is a Level 4 autonomy car.

  • My objectives were: when the music app opens for the first time, show any user relevant content; let the user move between songs without many touches; and help them while they search through the keyboard, to avoid unnecessary steps.
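One way to read the keyboard-search objective is prefix-based suggestions, so the user can stop typing early. A minimal sketch, assuming a tiny illustrative catalog (the artist list and ranking are not from the actual product):

```typescript
// As the user types, suggest matching artists so fewer keystrokes are needed.
const artists = ["Arctic Monkeys", "Aretha Franklin", "Arcade Fire", "Adele"];

function suggest(query: string, limit = 3): string[] {
  const q = query.trim().toLowerCase();
  if (q.length === 0) return []; // nothing typed yet → no suggestions
  return artists
    .filter((name) => name.toLowerCase().startsWith(q))
    .slice(0, limit);
}

console.log(suggest("ar")); // → ["Arctic Monkeys", "Aretha Franklin", "Arcade Fire"]
```

In a car context the same idea could also drive voice confirmation ("Did you mean Adele?") to cut touch interactions further.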