Designers feel human again
Four speakers discussed the mainstreaming of artificial intelligence Wednesday night at Design Week Portland's forum, Empathy, Bots, & the Future of Artificial Intelligence.
The event was hosted by digital agency Particle Design, and the first speaker was Crystal Rutland, Particle's CEO and founder. Asked about the book "The Flight from Conversation" and people using their addictive phones to avoid connecting, Rutland said, "We do have a responsibility, because people don't always do what's good for them."
She talked about apps like Moment, which track your digital usage. "But there could also be ways to design systems that automatically and in a very updated way have conversations with you about what is going on and help you experience the real world a little bit more."
What else can designers do?
"As UX (user experience) designers we always start with the question 'What problem are we solving?' If you can't really come up with an answer why are you doing it?"
And in the future, we're probably dealing with more shared screens than personal screens, for example "a window that serves up information in tiny contextual bites." Or maybe no screens at all, like Alexa and Siri.
Josh Lovejoy showed stills and videos taken by Clips, a small, inexpensive offline camera that can clip to anything. It was designed at Google to take candid photos, so you wouldn't have to fumble for a camera to capture memorable moments and candid expressions. Its on-device machine learning software whizzes through hours of images looking for the best bits, because who has time for that? Lovejoy, Lead UX for People + AI Research (PAIR) at Google, said he is "on a mission to bring deep humanity to AI," which means three things. One, doing the work to understand and address human needs: "If we don't, we're going to build giant systems that solve really, really small or non-existent problems. We have to do the work to guide these intelligences."
Two, recognize that "machine learning can't do anything humans can't do. There's no magical wake up one morning and the robots have legs and are walking around on their own." Machine learning seeks out patterns and is then told by humans what is good and bad, so it can pick up on little details.
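The division of labor Lovejoy describes – the machine finds patterns, humans tell it what counts as good or bad – is the heart of supervised learning. A minimal sketch of that idea, using a toy nearest-centroid classifier rather than anything resembling Clips' real software (the features and labels below are invented for illustration):

```python
# Toy supervised learning: humans label examples "good" or "bad",
# the machine learns the pattern (here, an average per label) and
# applies it to new examples.

def train(examples):
    """Learn one centroid per human-assigned label from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose learned centroid is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical photo features: (sharpness, face_size), each scaled 0..1.
labeled = [
    ((0.9, 0.8), "good"), ((0.8, 0.9), "good"),  # crisp, well-framed shots
    ((0.2, 0.1), "bad"),  ((0.1, 0.3), "bad"),   # blurry, poorly framed shots
]
centroids = train(labeled)
print(predict(centroids, (0.85, 0.7)))  # → good
```

The point of the sketch is Lovejoy's: nothing here is magic. The model only "knows" what good and bad mean because humans supplied the labels.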
Three, "we have to get back to making sure it works for real people, and the notion of trust."
His best example of someone who knows how to sort through photo rolls is the wedding photographer, so he ended up working with three professional photographers to learn their world view. One thing the camera had to learn was to ignore close-ups of hands, which is what it saw as people turned it on and off.
"There are questions you can always ask someone when you are working on AI. What actual predictions are we trying to make, and what can we confidently not care about?"
Human beings are bad at agreeing on what's good – just look at picking a restaurant. "But if you start with understanding what's not OK, it's a broader space, which allows people to make more confident decisions for themselves."
He concluded that AI doesn't find the needle in the haystack for you. "It shows you how much hay can be cleared away so you can find the needle."
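Lovejoy's "what's not OK" approach can be sketched as a simple filter: instead of trying to rank the single best photo, reject the frames humans have deemed clearly bad and leave the rest for the person to choose from. This is a toy illustration, not Google's actual Clips software; the frame fields, tests, and thresholds are invented:

```python
# Toy sketch of "clearing away the hay": discard frames that fail any
# human-defined "not OK" test, and hand the survivors to the human.

def clear_hay(frames, not_ok_tests):
    """Keep only frames that fail none of the 'not OK' tests."""
    return [f for f in frames if not any(test(f) for test in not_ok_tests)]

frames = [
    {"subject": "hand", "blur": 0.9},  # camera being switched on or off
    {"subject": "face", "blur": 0.1},  # a sharp candid shot
    {"subject": "face", "blur": 0.8},  # too blurry to keep
]
not_ok_tests = [
    lambda f: f["subject"] == "hand",  # ignore close-ups of hands
    lambda f: f["blur"] > 0.5,         # ignore blurry frames
]
keepers = clear_hay(frames, not_ok_tests)
print(keepers)  # only the sharp face frame survives
```

The design choice mirrors the talk: defining what is unacceptable is easier for humans to agree on than defining what is best, so the machine narrows the space rather than making the final pick.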
Anima Anandkumar is both a principal scientist at Amazon AI and the Bren Professor at Caltech. Her talk introduced Amazon as an AI pioneer – its purchase recommendations have been around for nearly two decades – and then outlined Amazon's newer AI efforts, including drone delivery (Prime Air) and cashierless stores (Amazon Go). Anandkumar said that Amazon is making AI available to all developers through its Amazon Web Services platform, which could lead to a lot more instant processing of live images and sounds.
Brian David Johnson was Intel's first official futurist and is now Futurist in Residence at Arizona State University's Center for Science and the Imagination. He also runs a "threatcasting lab," which looks ahead at best- and worst-case social scenarios. His message was that after AI, when robots are common on land, at sea and in the air, there will come "sentient tools" – tools that can think but will not replace human intelligence.
He pulled up an image of a robot sitting in a chair and said he hated it. For one, robots squat; they don't need chairs. And two, it was threatening to humans, suggesting they had been replaced. He stressed that decisions made in fear are usually wrong, for designers as much as anyone. "Fight, flight or freeze usually leads to stupid decisions," he said.
Although robots will be catastrophic to the job market as it is now – a lot of boring jobs, from driving to sorting, will be taken away from humans – that will just free us up to be more human. "If a robot can take your job, your job probably sucked," he said. "But your paycheck didn't suck."
Designers must ask themselves, "How can we change the future?" Johnson's answer: we should all change the story we tell about the future.
"The best way to futureproof your life is to be human. We like other humans," he said. "Reject the notion that the whole system is automatable."