Have you ever felt like you were having a one-sided conversation with someone? It feels as if you are putting in all the effort and getting minimal feedback or response in return.
When we use an application, we can think of the experience as a conversation between the user and the technology. Sometimes it feels as if we are having that same one-sided conversation with the technology we use. We learn the ins and outs of the tech we interact with, from the information architecture to the layout of the UI elements; in other words, we adapt to the technology. Just as we adapt to the technology, the technology should also adapt to us. This conversation should not be one-sided.
The metaphor of a “two-way conversation” isn’t new within the field of human-computer interaction; however, we shouldn’t take the metaphor at face value. When you are talking with a friend, the conversation is shaped by a specific state of affairs; it is sensitive to the scenario. For example, the language, the cadence of speech, the volume, and other parameters may be more or less appropriate depending on the context of the conversation.
Context is KEY
As UX designers in a world with exponential advancements in sensing technology, we ought to have context at the top of our list of things to consider in any design process—especially if we are designing for a “context-fluid” experience.
A context-fluid experience is achieved when a person uses a product in a variety of scenarios and the actual experience of the product is heavily affected by the context of use. As designers, we are (or should be) aware of the importance of contextual design, such as how a UI might be presented differently to a person who is running compared to a person who is relaxing on a couch.
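As a concrete illustration, here is a minimal sketch of what that kind of contextual adaptation could look like in code. The names (UserContext, choose_presentation) and the specific rules are hypothetical assumptions for illustration, not an existing API:

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    activity: str         # e.g. "running", "walking", "stationary"
    ambient_light: float  # lux reading from a light sensor

def choose_presentation(ctx: UserContext) -> dict:
    """Map sensed context to coarse UI presentation parameters."""
    if ctx.activity == "running":
        # A moving user gets larger type and fewer on-screen elements.
        return {"font_scale": 1.6, "max_items": 3, "haptics": True}
    if ctx.ambient_light < 10.0:
        # Low ambient light suggests a dim, high-contrast theme.
        return {"font_scale": 1.2, "max_items": 6, "theme": "dark"}
    return {"font_scale": 1.0, "max_items": 10, "theme": "light"}

print(choose_presentation(UserContext(activity="running", ambient_light=500.0)))
```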
What if we take the idea of contextual design a few steps further? Context is not defined only by the time of day, activity, or physical location of the user. A very important component of context is the user’s psychological state. The context of taking a relaxing night drive to grab some ice cream is completely different from that of rushing to the hospital for an emergency in the middle of the night. How can we create truly context-sensitive systems for context-fluid experiences?
Enter augmented cognition
The field of augmented cognition strives to build computer systems that can “sense” and interpret a user’s cognitive and physiological information, enabling the system to present information accordingly.
Just as humans have built machines to overcome our physical limitations, the field of augmented cognition aims to overcome our cognitive limitations within a human-computer environment. These limitations lie in our inability to process the massive amounts of information we receive from every direction (Wladawsky-Berger, 2013). With consumer-focused augmented reality on the horizon, we can expect to be designing not only for AR but also for augmenting the human-computer experience through feedback loops and sensing technology.
Current sensing technologies
Cognitive sensing
For a technological system to adapt to a user’s current cognitive and physiological state, the system must be able to gather this information. A few methods of obtaining cognitive information from a user include functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), and positron emission tomography (PET) (Ayaz et al., 2010).
However, these methods require cumbersome equipment, are very restrictive to the user, and are extremely expensive, making them impractical outside of a laboratory setting. A more practical method of acquiring cognitive information is functional near-infrared spectroscopy (fNIR), which can be employed to monitor changes in oxygenated and deoxygenated hemoglobin at the cortex of the brain.
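To give a sense of what such a system computes, here is a minimal sketch of the modified Beer-Lambert law commonly used in fNIR processing to convert changes in optical density at two wavelengths into hemoglobin concentration changes. The extinction coefficients, source-detector distance, and differential pathlength factor (DPF) below are illustrative placeholders, not calibrated constants:

```python
import numpy as np

# Extinction coefficients [oxy-Hb, deoxy-Hb] at two wavelengths
# (illustrative values; real systems use calibrated, wavelength-specific
# constants). Deoxy-Hb absorbs more near 760 nm, oxy-Hb near 850 nm.
E = np.array([[1.4, 3.8],   # ~760 nm
              [2.5, 1.8]])  # ~850 nm

def hemoglobin_changes(delta_od, source_detector_cm=3.0, dpf=6.0):
    """Solve the 2x2 Beer-Lambert system for [d_oxy, d_deoxy]."""
    effective_path = source_detector_cm * dpf  # mean photon path length
    return np.linalg.solve(E, np.asarray(delta_od) / effective_path)

# Example: measured optical-density changes at the two wavelengths.
d_oxy, d_deoxy = hemoglobin_changes([0.012, 0.018])
print(f"change in oxy-Hb: {d_oxy:.5f}, deoxy-Hb: {d_deoxy:.5f}")
```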
With fNIR, it is now possible to create more practical, portable, accessible, and relatively inexpensive monitoring systems. Newer systems are available, such as the Artinis OctaMon, which has a headband-like form factor. Artinis also produces an even smaller monitoring device that is simply an electrode connected to a small battery pack. Both of these systems can wirelessly transmit data for real-time monitoring. These developments suggest that fNIR is a suitable method for acquiring cognitive data in controlled laboratory settings as well as in mobile contexts.
That said, these systems need to become even more affordable and further miniaturized to be adopted outside of their niche market. For the time being, a more practical means of acquiring psychophysiological data is necessary for use in a consumer setting.
Physiological sensing
Measuring a user’s physiological state is an easier and less invasive process than measuring cognitive states. There is a plethora of wearables on the market today that collect biometric data such as motion, heart rate, skin temperature, and perspiration. The advantage of physiological sensing technologies is that they have already been widely adopted: think Fitbit, Apple Watch, and even the Polar heart rate monitors used at the gym. Furthermore, these sensing methods can also be used to infer a user’s psychological state. For instance, there are products on the market today that measure a user’s rate of breathing to infer mood.
That said, biometric data may only support a high-level inference of context. For example, from data such as heart rate, perspiration, and rate of breathing, I still may not be able to discern whether a user’s psychological state is one of distress or exuberance; I may only be able to distinguish between “calm” and “excited” states. This level of granularity may not be fine enough to design truly context-fluid experiences, so further research in this area is warranted.
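To make that granularity limit concrete, here is a minimal sketch of the kind of coarse inference described above. The threshold values and function name are illustrative assumptions, not validated cutoffs:

```python
# A toy arousal heuristic: simple thresholds can separate "calm" from
# "excited", but the same signals cannot distinguish distress from
# exuberance, which is exactly the granularity problem noted above.
def infer_state(heart_rate_bpm: float, breaths_per_min: float) -> str:
    arousal = 0
    if heart_rate_bpm > 100:    # elevated heart rate (assumed threshold)
        arousal += 1
    if breaths_per_min > 20:    # rapid breathing (assumed threshold)
        arousal += 1
    # High arousal could be panic or excitement; valence is unknown.
    return "excited" if arousal >= 1 else "calm"

print(infer_state(heart_rate_bpm=120, breaths_per_min=24))  # excited
print(infer_state(heart_rate_bpm=65, breaths_per_min=14))   # calm
```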
Bringing it all together now
Currently, there isn’t a very cost-effective, widely used method of easily discerning a user’s psychophysiological state. That said, as forward-thinking designers and developers, we should be anticipating the ability to acquire and make use of this psychophysiological information.
What we can do currently, however, is think about how to best make use of the available data acquisition methods to create context-sensitive applications for context-fluid experiences.
As designers, it is our job to continue to facilitate and improve the two-way conversation between our technology and its users. Let’s work toward creating meaningful feedback loops between human and computer, thus optimizing the context-fluid experience.
References
Ayaz, H., Willems, B., Bunce, S., Shewokis, P. A., Izzetoglu, K., Hah, S., Deshmukh, A., & Onaral, B. (2010). Cognitive workload assessment of air traffic controllers using optical brain imaging sensors. In T. Marek, W. Karwowski, & V. Rice (Eds.), Advances in Understanding Human Performance: Neuroergonomics, Human Factors Design, and Special Populations (pp. 21–32). CRC Press, Taylor & Francis Group.
Wladawsky-Berger, I. (2013, July 1). The era of cognitive computing. IrvingWB.com. Retrieved May 20, 2014, from http://blog.irvingwb.com/blog/2013/07/the-dawn-of-a-new-era-in-computing.html
Very interesting indeed! To successfully produce context-fluid experiences, it could also be helpful to include eye tracking, which is advancing quickly and may be widely available within the next 3–5 years. Marrying all of these data points together will likely also require heavy use of AI techniques. One that stands out here is deep learning, which requires very large data sets and, once the model is sufficiently trained, can predictively evaluate a scenario from limited information.
Great article, Cam!
Honestly, as a UX designer, I think it depends on the products you are working on. If it is an online interface or a mobile app interface, a brain-computer interface miiight become a reality in the future, but I highly doubt it is necessary for checking your bank account or Facebook.
Once we step into the realm of AR/VR, the “psychophysiological information” our bodies are constantly generating can certainly be leveraged to affect our physical 3D environment, but again… what products are we talking about here? In a video game, sure, the main character can leverage your basic vitals, EEG waves, etc., but this is just entertainment. I can’t think of one solid application where it makes sense to use such highly invasive, intimate data as an input, other than in a hospital bed setting. With all the absurd “internet of things” connected home devices coming out, we could use this data to manipulate mood lighting and the like intelligently, but that’s a weak application for such rich data. I’ve used the HoloLens extensively, and I can think of some great use cases, but it’s too early for me to trust corporations to have access to my intimate data in good faith. UXers are supposed to help users, after all… not make a business rich by exploiting data generated from the human condition, which straddles many moral and ethical boundaries.
At a time when we inch closer to a 1984, Big Brother-type society, users will become increasingly reluctant to share their intimate vitals on an always-on, internet-connected device that is only one hack away from physically harming them. We are already tracking every person with a smartphone in their pocket, and now you want to capture the output from my human senses and brain activity too? Yikes! I digress… Nice post though; there is an interesting moral/ethical discussion to be had…
Hi there Samantha, thank you for your thoughtful response!
Ethical discussions are going to be incredibly important as technology continues to move forward. In any use case, the benefits must outweigh the costs in order to justify using a product.
I could write an entire post in response but I will try to keep it concise.
1. Regarding biometric data – anyone who uses a smartwatch or activity tracker is already on board with tracking and logging that type of intimate data. However, I understand that may not be the data concerning you, or me, at this point.
2. Cognitive data – I think that being able to affordably and practically gather this data in a consumer setting is years away. That said, we need to be thinking about the ethical implications and appropriate use cases now. It really boils down to data security and justification of the use case.
Perhaps this data should not be treated like any other data in the world. New security standards may need to be developed if this data is going to be tracked by the average person. That said, the security of this data is an entire can of worms that would need to be discussed by leading professionals in the field.
My second thought is about justification. I agree with you that adjusting the color of the lights in your home to suit your mood isn’t a valid use case for leveraging this type of data.
However, I do believe there are cases where the benefits of sensing and interpreting cognitive data outweigh the costs. The primary reason for doing so would be to reduce cognitive load in potentially high-stress situations where personal safety and the safety of others are the main concern: working in manufacturing where heavy machinery is involved, driving a bus full of people in a crowded city, piloting an airplane, and so on.
Of course, research into each of these potential use cases is warranted, but if it turns out that lives could be saved by minimizing cognitive load and user error through sensing, interpreting, and responding to the operator’s cognitive information, I would say that utilizing such methodology is justified.
There is definitely more room for conversation about the costs and benefits of sensing cognitive data in more “relaxed” settings (is it even justifiable?). The risk of blindly implementing this type of technology could be extremely high, and much consideration and care would need to go into designing any application that senses and responds to cognitive data.
Thank you again, Samantha, for your thought-provoking response. Connect with me on LinkedIn if you would like to continue the conversation!
-Cameron