One of my favourite talks at the Interaction 16 conference was Smart Frictions, by Simone Rebaudengo and Nicolas Nova. I've been fascinated by smart objects and the oh-so-hot term 'IoT' lately, since every product seems to be getting 'smarter' these days. In this dual talk, Simone and Nicolas explored some issues in 'smart' technology and connected devices, their uses and their misuses.
Simone Rebaudengo opened the talk by explaining what I/O means: I for input, O for output, while the '/' is an undefined thing filled with 'smart' black boxes whose workings we don't really understand. First we dismantled our understanding and assumptions around the term 'smart'. When a product is smart, our assumptions change the way we interact with it, and the expectations we form shape how we experience its flaws. We believe that objects are neutral and objective in nature, but what if we designed objects that were transparently biased? How do we influence behaviour? "Experiences with “smart” products seems to converge into a passive taking over of tasks that hides all the complexity and control behind “simple” interfaces."
What interfaces can we design to avoid turning people into unaware, passive bystanders? How can a product adapt its smartness to a range of users’ profiles in order to fit their culture / desires / situations?
Teacher of Algorithms
Taking a look at robot vacuums, we can see that they really aren't that smart for the most part, often getting stuck in corners or under the couch. Smart objects evolve as they learn and interpret our habits, but how much smarter might they be if we could train and teach their algorithms to enhance their decisions? What if things are not that good at learning after all without some help? Simone showed us a video from ThingTank in which they train smart objects such as robot vacuums with a stick, much like punishment/reward training, to teach them to move around and clean 'smartly' (see 2:38).
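To make the punish/reward idea concrete, here is a minimal sketch of that kind of feedback loop in Python. Everything here is illustrative and assumed (the class name, the actions, the learning rate); it is not how ThingTank's prototype actually works, just the general shape of training behaviour with negative and positive feedback.

```python
import random

class VacuumTrainer:
    """Toy sketch: learn action preferences from punish/reward feedback."""

    def __init__(self, actions):
        # Every action starts with a neutral score.
        self.scores = {a: 0.0 for a in actions}

    def choose(self):
        # Mostly pick the best-scoring action, occasionally explore.
        if random.random() < 0.1:
            return random.choice(list(self.scores))
        return max(self.scores, key=self.scores.get)

    def feedback(self, action, reward):
        # A poke with the stick is reward = -1, praise is +1;
        # nudge the score toward the feedback signal.
        self.scores[action] += 0.5 * (reward - self.scores[action])

trainer = VacuumTrainer(["forward", "turn_left", "turn_right"])
# Punish driving into the corner, reward turning away from it...
for _ in range(20):
    trainer.feedback("forward", -1)
    trainer.feedback("turn_left", +1)
# ...and the trainer ends up preferring to turn.
```

The point the talk makes survives even in this toy: the 'smartness' only emerges if a human is willing to stand there with the stick and teach.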
Politics of Power
"This project looks at how a mass-manufactured product - although developed for a precise and unique purpose - could behave differently depending on the nature of its communication protocol and how the design of the product itself could reflect these hidden logic and rules.
In every existing network - be it machine or nature, rules are established in order to determine its structure, hierarchy, and the way the communication will be synchronized between all the actors of the network. But who and what criterions will define this power hierarchy? Products and networks are inherently embedded with ideologies of the designers, engineers, and other stakeholders who shape their trajectory along the way."
If there were a power shortage, how would our machines work in parallel? Well, they can't, which is why we begin to question how the politics of power work and who should be given priority.
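One way to see how ideology gets embedded in a protocol is to write down a priority rule explicitly. The sketch below is purely hypothetical (the device names, priorities and wattages are mine, not Automato's): a fixed hierarchy decides which devices get power when the budget runs short, and whoever wrote that ordering made a political choice.

```python
def allocate_power(devices, budget):
    """Grant power in priority order (0 = most important) until the
    budget runs out; skipped devices simply don't run."""
    granted = []
    for name, priority, draw in sorted(devices, key=lambda d: d[1]):
        if draw <= budget:
            granted.append(name)
            budget -= draw
    return granted

# Hypothetical household network during a shortage of 300 watts.
devices = [
    ("fridge", 0, 150),
    ("heater", 1, 1200),
    ("lamp", 2, 60),
]
allocate_power(devices, 300)  # the heater loses out
```

Change one priority number and a different device goes dark: the hierarchy is a design decision hiding inside the code.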
For a more in-depth analysis of the work that Automato has been delving into, check out their site.
Nicolas Nova took over the talk and further discussed what smartness means. Right now, we adjust to our technology. For example, a driver asks Siri to call her friend, but Siri isn't familiar with the pronunciation of the name, so the driver has to pronounce the name inaccurately in order for Siri to understand. He explained how smartness is not neutral and that we need to be good teachers in order for our devices to be 'smart'... but what if we are bad at teaching? What if we have lazy behaviours? Do we simplify ourselves for machines? How much responsibility should humans keep over 'smart' devices? When do we need control, and when do we let the machine take over? We need to find the in-between.
What we should aim for:
Smart → Clever
Automation → Assistive
Optimized → Resourceful
Magic → Expectable
Intelligent → Perspicacious
Predictive → Perceptive
Overall, this talk asked a lot of questions that left me very curious about what future technology will be like, how smart it will be, and how smart we will be as well.