Sixth Design Sprint: Lists with feedback

 
 

Creating non-visual feedback

The initial idea for this week was to explore how we might move the visual cues we use to understand our location in a menu into physical feedback. If we think of a list of items, we rely on our visual senses to interpret our place in the list and how we need to interact to reach what we are looking for.

 
 

If we think about interacting with the same list without looking at the screen, this becomes a much harder task. Visual feedback such as a scrollbar, alphabetical or numerical ordering, and the visible top and bottom of a list provides a number of different sources of information that we use to interpret our place within a system.

The goal of this sprint was to explore how we might create feedback within a physical control that can assist with non-visual placefinding.

Process

This sprint saw a return to the rotary dial as a control device. Whereas in previous sprints I explored how user input might affect a control, during this sprint I wanted to explore how a control might provide feedback to the user.

 
 

The initial prototype contained a rotary encoder, a solenoid, and a DC motor. The DC motor was attached to the rotary encoder via a reduction gear. The idea was that when the user turns the wheel past the end of a list, the wheel springs back, letting the user know that there is no further content to scroll through.

 
 

The DC motor was programmed to trigger when the user reached the end of the list, driving the wheel in the opposite direction to the one the user had been scrolling in. This worked quite well in creating the feeling of hitting an invisible wall.
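A rough Arduino-style sketch of this behaviour might look like the following; the pin assignments, list length and pulse timing are placeholders for illustration rather than the values from my build.

```cpp
#include <Arduino.h>

// Hypothetical pin assignments for this sketch
const int ENC_A = 2;          // rotary encoder channel A (interrupt pin)
const int ENC_B = 3;          // rotary encoder channel B
const int MOTOR_FWD = 5;      // motor driver input: push the wheel "forward"
const int MOTOR_REV = 6;      // motor driver input: push the wheel "backward"

const int LIST_LENGTH = 20;   // number of items in the on-screen list
volatile int position = 0;    // current list position, updated by the encoder

void onEncoderTurn() {
  // Channel B tells us the direction at the moment channel A changes
  if (digitalRead(ENC_A) == digitalRead(ENC_B)) position++;
  else position--;
}

void setup() {
  pinMode(ENC_A, INPUT_PULLUP);
  pinMode(ENC_B, INPUT_PULLUP);
  pinMode(MOTOR_FWD, OUTPUT);
  pinMode(MOTOR_REV, OUTPUT);
  attachInterrupt(digitalPinToInterrupt(ENC_A), onEncoderTurn, CHANGE);
}

void loop() {
  // When the user scrolls past either end of the list, pulse the motor
  // against the direction of travel so the wheel "springs back".
  if (position > LIST_LENGTH) {
    digitalWrite(MOTOR_REV, HIGH);   // kick back towards the list
    delay(80);                       // a short pulse feels like hitting a wall
    digitalWrite(MOTOR_REV, LOW);
    position = LIST_LENGTH;
  } else if (position < 0) {
    digitalWrite(MOTOR_FWD, HIGH);
    delay(80);
    digitalWrite(MOTOR_FWD, LOW);
    position = 0;
  }
}
```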

What I learned

This week was quite useful in exploring how to create a physical interpretation of a digital concept. In doing the work I began to see a lot of possibilities for how this might be expanded upon.

I hope to explore this concept further as I begin the refinement stages of the project.

Fifth Design Sprint: Geared Modes

 
 

Investigating dynamic mapping as a way to combat mode confusion

The aim of this sprint was to see how a user's place in a system can be better understood by creating a physical representation of their location. The work in this sprint focused on clearly identifying the mode of the interface.

 
 

What is a mode? 

A mode is a segment of an interface that is dedicated to one specific task. Almost all controls within a mode relate to one specific function.

In his book The Humane Interface, Jef Raskin defines a mode as:

"An human-machine interface is modal with respect to a given gesture when (1) the current state of the interface is not the user's locus of attention and (2) the interface will execute one among several different responses to the gesture, depending on the system's current state." (Page 42)."

Within the context of an HMI system, modes are important ways in which drivers can complete specific subsets of tasks that are related to each other. These might be things like navigation, entertainment, communication or car control.

Mode Confusion

Mode confusion is one of the most common causes of error with interactive systems. It occurs when a user misunderstands the current mode of a system and engages with it as if it were in another mode.

This is a major problem in automated flight systems, where pilots need a clear understanding of how much control they have over the aircraft at any given time.

In automotive HMI systems the problem is less severe, but it still matters: if the driver has a clear idea of the system's current mode, they need to give less attention to the interaction itself.

 
 

Process

During this week I drew on experience from working on HMI systems during my internship at Frog Design. My aim was to explore how modes might be explicitly communicated to the user through the physical nature of the control.

The initial inspiration was the idea of the forked gear lever in manual cars. This control provides the user with a clear physical representation of which gear the car is in by having a unique physical position for each gear. The driver needs to physically move the control from one position to another to change the gear. 

 
 

Natural (but dynamic) Mapping

My goal was to explore how mapping between a screen interface and a physical control might be enhanced by making the link between the two really clear. 

I used Processing to create a simple mockup of an interface with three distinct tabs representing three modes of an HMI system. Tabs are a common method of segmentation in HMI, so this was a good concept to test. The idea was to link the function of a touch interface with the function of a physical control, allowing them to represent the same function at the same time.

This meant that when the screen was on the red tab, the control would move to the corresponding position and allow access to those functions. The two were linked so that interacting with either the screen or the physical control would adjust the other to present the same function.
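As a sketch of this two-way link, something like the following Arduino-style code could sit on the control side. The use of a servo to move the control, the pin numbers, the tab angles and the single-character serial protocol are all assumptions for illustration, with the Processing mockup on the other end of the serial connection.

```cpp
#include <Arduino.h>
#include <Servo.h>

Servo controlArm;                        // moves the physical control between positions
const int POT_PIN = A0;                  // senses where the user has pushed the control
const int TAB_ANGLES[3] = {20, 90, 160}; // one physical position per tab
int currentTab = 0;

int tabFromPot(int reading) {
  // Map the analogue position of the control onto the three tab zones
  if (reading < 341) return 0;
  if (reading < 682) return 1;
  return 2;
}

void setup() {
  Serial.begin(9600);
  controlArm.attach(9);
  controlArm.write(TAB_ANGLES[currentTab]);
}

void loop() {
  // Screen -> control: the UI sends '0', '1' or '2' when a tab is touched
  if (Serial.available()) {
    int tab = Serial.read() - '0';
    if (tab >= 0 && tab <= 2 && tab != currentTab) {
      currentTab = tab;
      controlArm.write(TAB_ANGLES[currentTab]);
    }
  }

  // Control -> screen: if the user moves the control, tell the UI to switch tabs
  int tab = tabFromPot(analogRead(POT_PIN));
  if (tab != currentTab) {
    currentTab = tab;
    controlArm.write(TAB_ANGLES[currentTab]);
    Serial.print(currentTab);            // the screen mockup listens for this
  }

  delay(20);
}
```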

 

What I Learned

This sprint was valuable as it helped me explore the value of representation at all stages of design, an idea that began to emerge during the second sprint.

The relationship between the physical control and the screen was also really interesting to explore. Mapping seems to really help with understanding one's location within a system, and being able to encode some additional information in the position of the physical control seems to have benefits as well.

 

Fourth Design Sprint: Dual User Interfaces

 
 

Designing experiences for multiple users

At this point in the project I thought it would be important to reflect on the roles of the driver and the passenger within the car. The car experience is quite unique, as passenger and driver have different needs when interacting with an HMI system and also have defined positions within the car.

 
 

Process

The aim of this week was to explore and sketch out an interface that could identify and adjust to the users who are interacting with the system. This is of importance in a driving context as the driver must maintain as much attention on the act of driving as possible, while the passenger has no such constraint. 

This essentially means that there is no single interface that will best suit both the driver and the passenger. It was with this in mind that I began to sketch based on this concept.

 

 
 

The prototype was built in Framer and Processing. The screen consists of a menu with two states, one for the driver and one for the passenger.

As the user begins to interact with the screen, the interface will adjust to the specific role of the user. The driver will get an interface with larger, clearer information with lower information density, while the passenger will have more detailed controls and deeper menus.

The prototype uses a Leap Motion to detect which hand the user is using. This gives a pretty good analogue to the driving situation, as it lets you work out which person is most likely to be using the interface.
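The actual prototype did this in Framer and Processing, but the hand-detection logic could be sketched in C++ against the Leap Motion SDK roughly like this; the driver/passenger mapping assumes a left-hand-drive car, and the printed strings stand in for switching the interface state.

```cpp
#include <chrono>
#include <iostream>
#include <thread>
#include "Leap.h"

int main() {
  Leap::Controller controller;

  while (true) {
    Leap::Frame frame = controller.frame();
    if (!frame.hands().isEmpty()) {
      // In a left-hand-drive car, a right hand reaching the centre stack is
      // most likely the driver's; a left hand is most likely the passenger's.
      Leap::Hand hand = frame.hands().frontmost();
      if (hand.isRight()) {
        std::cout << "driver mode: larger targets, lower information density\n";
      } else {
        std::cout << "passenger mode: detailed controls, deeper menus\n";
      }
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
  }
}
```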

What I learned

Despite the simplicity of the build, this prototype was actually really well received by people I tested it with. There is some quality to the interface changing when you interact with it that people really responded well to.  It was at this point in the project that the idea of a dialogue between the user and the controller began to take shape. 

 

Third Design Sprint: Stacked Controls

 
 

Stacked Controls

The aim of this sprint was to combine a number of similar types of push buttons into one device. In modern HMI systems there is often limited space available for physical controls in the centre stack, so controls need to be multifunctional.

The idea of placing multiple controls within one device is called multiplexing, a topic I discuss further in my research phase. In this sprint I wanted to explore the concept by trying to incorporate a number of different modalities of a button into one object, creating a multifunctional device.

 
 

Process

The idea was to create a device that would allow for a number of different controls. My goal was to explore a different control method than in previous weeks, and I started by thinking about buttons as the modality I wanted to work with.

The initial design featured a puck-like disc that housed capacitive touch strips on the sides. The sides could also be pushed in to activate two more buttons internally. 

Initial testing showed that the force needed to activate the internal buttons was too high for most people, so I modified the design so that the buttons were easier to activate.

 

The Joy-puck

Directional control is a key function of most HMI controls. To create this functionality I installed the puck onto a raised spring assembly. Under this, four buttons were arranged up, down, left and right to create a joystick.

 
 

 

The joy-puck offers the user three different levels of control. The user can operate the directional buttons alone, or in conjunction with either the tap functionality or the press functionality.
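A hedged Arduino-style sketch of how the three levels could be read is below; the pins, thresholds and the use of the CapacitiveSensor library are assumptions for illustration, not a record of the prototype's wiring.

```cpp
#include <Arduino.h>
#include <CapacitiveSensor.h>

CapacitiveSensor touchStrip(4, 5);       // send/receive pins for one side strip (tap)
const int PRESS_PIN = 6;                 // internal button behind the same strip (press)
const int DIR_PINS[4] = {8, 9, 10, 11};  // up, down, left, right under the spring mount
const char* DIR_NAMES[4] = {"up", "down", "left", "right"};

void setup() {
  Serial.begin(9600);
  pinMode(PRESS_PIN, INPUT_PULLUP);
  for (int i = 0; i < 4; i++) pinMode(DIR_PINS[i], INPUT_PULLUP);
}

void loop() {
  // Level 1: directional input from tilting the whole puck
  for (int i = 0; i < 4; i++) {
    if (digitalRead(DIR_PINS[i]) == LOW) Serial.println(DIR_NAMES[i]);
  }

  // Level 2: a light tap on the capacitive strip
  if (touchStrip.capacitiveSensor(30) > 200) Serial.println("tap");

  // Level 3: a firmer press that closes the internal button
  if (digitalRead(PRESS_PIN) == LOW) Serial.println("press");

  delay(50);
}
```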

 

 
 

What I learned

This week was probably the least successful of the work so far. One of the outcomes was that it became even clearer that designing the functionality of a control requires a very close relationship with the design of the screens that accompany it.

There is a close relationship between the information architecture of the system and how a control mechanism works, and vice versa. This was a valuable lesson to reflect on when building physical controls.

Second Design Sprint: Push (not pull) wheel

The Push (not pull) wheel

Exploring force as an input.

The aim of this sprint was to explore how varying the level of force we apply to an input might affect how we interact with a simple menu.

 

Building on previous knowledge.

This week was designed to try and free myself from getting stuck on just one idea. I had a concept in my head for a while before starting the degree project, and I thought I should just try and do it in a week to get it out of my system and move on to a more explorative approach to the problem.

The initial idea was built upon two existing works: the Force Touch trackpad from Apple and the Stacked UI project by Rob Nero at Malmö University.

My goal for this week was to explore the implications of force input on a menu system by creating a dial that offers different levels of control based on how much pressure you apply.

Process

The process of creating the object was fairly straightforward. The box houses a force-sensitive resistor, which changes resistance based on the amount of pressure placed on it. It sits in the bottom of the box, and the harder you push down, the lower its resistance becomes, which the microcontroller reads as increasing pressure.

This was combined with a solenoid to create the feedback. Activating the solenoid for different lengths of time creates different kinds of clicks, from small "ticks" to larger, more pronounced clunks. This allows different levels of feedback to be passed on to the user, creating a sense of scale to the interaction.

The final element was the rotary encoder. This drives the ticking by triggering a function every time the count passes a multiple of 2. In the deeper menu levels the ticking happens at every multiple of 5, then 10, giving the feeling of more defined steps rather than fine control.
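Put together, the logic could be sketched in Arduino-style code like this; pin numbers, force thresholds and click timings are illustrative placeholders rather than the values used in the prototype.

```cpp
#include <Arduino.h>

const int FSR_PIN = A0;        // force-sensitive resistor in a voltage divider
const int SOLENOID_PIN = 7;    // solenoid that produces the click
const int ENC_A = 2;           // rotary encoder channel A (interrupt pin)
const int ENC_B = 3;           // rotary encoder channel B

volatile long steps = 0;       // raw encoder steps

void onEncoderTurn() {
  if (digitalRead(ENC_A) == digitalRead(ENC_B)) steps++;
  else steps--;
}

// Longer activations give louder, heavier clicks for deeper menu levels
void click(int durationMs) {
  digitalWrite(SOLENOID_PIN, HIGH);
  delay(durationMs);
  digitalWrite(SOLENOID_PIN, LOW);
}

void setup() {
  pinMode(SOLENOID_PIN, OUTPUT);
  pinMode(ENC_A, INPUT_PULLUP);
  pinMode(ENC_B, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(ENC_A), onEncoderTurn, CHANGE);
}

void loop() {
  // How hard the user is pressing selects the menu depth
  int force = analogRead(FSR_PIN);             // 0..1023
  int interval;                                // encoder steps per tick
  int clickLength;                             // ms the solenoid fires
  if (force < 300)      { interval = 2;  clickLength = 10; }  // fine control
  else if (force < 700) { interval = 5;  clickLength = 25; }
  else                  { interval = 10; clickLength = 40; }  // coarse, defined steps

  // Fire the solenoid every time the encoder passes a multiple of the interval
  static long lastTick = 0;
  if (steps / interval != lastTick) {
    lastTick = steps / interval;
    click(clickLength);
  }
}
```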

What I learned

This week was good as I was able to get to an outcome by the end of the week. I felt that I was able to reach the point I wanted to, and I created an object that I could then reflect upon for future weeks.

I also learned, in showing the concept to people, that physical affordance matters even at the early prototype stage. Making sure the objects are "saying" what you want them to is important at all stages.

First Design Sprint: The Encoder Exploder

The Encoder Exploder

Rethinking the rotary encoder

Background

One of the main interaction paradigms in the automotive space is the rotary dial. This type of control is handy in interface design as it fits two separate mental models quite easily: it affords moving left and right through item selection, and also up and down through a list of items.

Demonstrating the versatility of a rotary encoder.

Approach

This first sprint was set up as a way to start investigating and reproducing components. It was also a way to get to grips with the interaction patterns associated with automotive design and to start building a framework of design principles.

What I did

I started by taking a simple rotary encoder that can be purchased just about anywhere; mine came from Elektrokit here in Sweden.

Encoders differ from potentiometers in that, instead of measuring resistance across a set range to determine position, they can rotate infinitely.

They register the movement and direction of rotation by sending a series of on and off signals to a microcontroller; this pattern is known as Gray code.

The microcontroller can then decode the sequence in which the signals have been received to work out which way the encoder is being turned. Once a signal sequence is complete, a value can be increased or decreased depending on the direction.
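A minimal sketch of this decoding step, assuming a standard two-channel encoder wired to an Arduino, might look like the following; the lookup table simply maps each pair of consecutive A/B states to a step of +1, -1 or 0.

```cpp
#include <Arduino.h>

const int PIN_A = 2;
const int PIN_B = 3;

long position = 0;
uint8_t lastState = 0;

// Index: (previous AB << 2) | current AB. Valid transitions give +1 or -1;
// everything else (no change or a skipped state) gives 0.
const int8_t TRANSITIONS[16] = {
   0, -1,  1,  0,
   1,  0,  0, -1,
  -1,  0,  0,  1,
   0,  1, -1,  0
};

void setup() {
  Serial.begin(9600);
  pinMode(PIN_A, INPUT_PULLUP);
  pinMode(PIN_B, INPUT_PULLUP);
  lastState = (digitalRead(PIN_A) << 1) | digitalRead(PIN_B);
}

void loop() {
  uint8_t state = (digitalRead(PIN_A) << 1) | digitalRead(PIN_B);
  if (state != lastState) {
    position += TRANSITIONS[(lastState << 2) | state];
    lastState = state;
    Serial.println(position);   // value rises or falls depending on direction
  }
}
```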

I wanted to see if it was possible to replicate this signal creation and make my own encoder. I started by pulling one apart to work out exactly how it worked. This type is a mechanical encoder, based on two signal pins being linked to ground by a brush inside the knob.

Exploring Components

 
 

I took close-up photos of the mechanism in the encoder and found some interesting things. There are actually two different circuits: one for the encoder and one for the pushbutton. In my head I thought they would somehow be related to each other, but they are completely independent.

In addition, the clicky feeling of the encoder's rotation comes from a separate mechanical component. The click has nothing to do with the signal itself; it simply offers greater precision when making a movement, so that accidental movement can be minimised.

I then started to prototype based on this principle, first with a paper device, which didn't really work as maintaining contact with paper alone was difficult. I was able to make the connections work without rotation, using a multimeter to check that the circuit was completing in the correct order. This encoder sends its signal in the pattern Signal A on, Signal B on, Signal A off, Signal B off.

From this first prototype I was able to tell that the concept was working, and I decided to scale up to MDF for a second prototype.

 

I laser-cut an MDF profile of the circuit mechanism at around 20:1 scale. I thought this would be a good way to really see whether it was working, and it would also be easy to manipulate if it didn't.

I laminated aluminium foil to create some conductive components. This meant I was able to recreate the two-channel circuit of the rotary encoder.

 

 
 
 

What I learned

I think this first sprint was a great way to start engaging with hardware as a design tool. By taking things apart and working out how they function, things that might seem too difficult to build become what they should be: tools we can use to build with.

I think as interaction designers we can sometimes get a bit disconnected from the technology we are designing for. I often hear people saying "I'm not an engineer, so I can't do that" or "I'm not a coder, so I don't need to worry about that."

I think that mindset creates a distance from the tools of design. To establish a concept of expression with digital components and controls, we need to see them as tools in our belt, or paint for our brush.

Research Summary Pt. 4 Usability and Multiplexing

 
 

FEEDFORWARD

Feedforward is the idea in user interface design that a function is able to give some indication of what will happen when we interact with it. How we achieve this is at the will of the designer, and we must have a keen understanding of signifiers, affordances and information display in order to manage it. For tangible products, Djajadiningrat et al. note:

“For a control to say something about the function that it triggers, we need to move away from designs in which all controls look the same.”

MAPPING

Mapping is the relationship between how information is carried across physical and digital mediums. For example, if a row of switches operates a row of lights in the same order, this might be considered a natural mapping, as the spatial relationship of the switches matches the spatial relationship of the lights.

Difficulties occur when we begin to map abstract concepts we cannot physically interact with. Djajadiningrat et al. highlight:

“Natural mapping falls short when dealing with abstract data that has no physical counterpart.”

CONSTRAINTS

Constraints are the ways in which physical controls impose specific limitations on how a system is used. If we think about our Mercedes and Tesla examples, we can see these emerging. The driving experience is already very defined, yet additional constraints are imposed by digital systems and methods of control.

Multiplexing

Multiplexing is the concept of allowing multiple methods of input to happen within one device. Perhaps the best example of this concept is the computer mouse: we can move the device, click for one effect, double-click for another, or click and drag for yet another.

These different gestures make multiple methods of input possible within one modality of interaction. Multiplexing breaks down into two main methods: spatial and time-based multiplexing.
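As a toy illustration of time-based multiplexing, the snippet below classifies presses of a single mouse button as a click, double-click or drag purely from timing and movement; the thresholds and event structure are made up for the example.

```cpp
#include <cstdio>
#include <initializer_list>

// One physical input (the primary mouse button) yields different commands
// depending on when and how it is used: time-based multiplexing in miniature.

enum class Gesture { Click, DoubleClick, Drag };

struct ButtonEvent {
  double downTime, upTime;   // seconds
  double distance;           // pointer travel while held, in pixels
};

Gesture classify(const ButtonEvent& current, double previousClickTime) {
  if (current.distance > 5.0)                     return Gesture::Drag;
  if (current.downTime - previousClickTime < 0.3) return Gesture::DoubleClick;
  return Gesture::Click;
}

int main() {
  double lastClick = -1.0;

  ButtonEvent press1{1.00, 1.08, 1.0};   // quick press, little movement
  ButtonEvent press2{1.20, 1.27, 0.5};   // second quick press soon after
  ButtonEvent press3{3.00, 3.90, 40.0};  // held and moved across the surface

  for (const ButtonEvent& e : {press1, press2, press3}) {
    Gesture g = classify(e, lastClick);
    lastClick = e.downTime;
    std::printf("%s\n", g == Gesture::Drag ? "drag"
                      : g == Gesture::DoubleClick ? "double-click" : "click");
  }
}
```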

Spatial Multiplexing

 

Time-Based Multiplexing

Research Summary Pt. 3 Cognitive Load Theory

 
 

Cognitive Load Theory

Cognitive load is the amount of strain a given task places on our ability to hold and process information in memory. Classic cognitive load theory defines three main types of cognitive load.

Intrinsic

Intrinsic load is defined by the complexity of the information to be learned, and the amount is determined by the interactivity of the elements being learned. For example, learning individual words in a foreign language is relatively straightforward, but learning grammar is much harder due to the interactivity of the words in a phrase.

Extraneous

Extraneous load is caused by the inappropriate display of information, irrelevant information, or having to combine spatially separated information. If an operator of a power plant has to apply safety information displayed in one format to an emergency presented in another, the extraneous load placed on him or her increases.

Germane

Germane cognitive load is essentially the load involved in practising things to get better. A certain amount of load is required to construct and automate "schemas", which allow for easier completion of the task in the future.

Multimodality

There is some element of multimodality in most things we do, and this might be because it is actually a way in which we deal with cognitive load. Think of giving directions to a tourist: we usually point at things, wave and gesture as a way to get a more complex message across.

As Sharon Oviatt says:

“Users respond to dynamic changes in their own working memory limitations and cognitive load by shifting to more multimodal communication as load increases with task difficulty. As a result, a flexible multimodal interface supports users in self-managing their cognitive load and minimizing related performance errors while solving complex real-world tasks.”