What our intimate interactions with robots will look like

The hollow ball rattled back and forth across the table between me and the robot. Sometimes I outplayed the robot. More often, the robot outsmarted me.

That was at CES, a show that keeps you moving, but my heart never raced faster than when I was squaring off against Forpheus.

Forpheus, made by a company called Omron, used a combination of sensors on my pingpong paddle and five cameras to understand where I was, and where and how I was moving. That let the robot return the ball with a paddle of its own whenever I batted it over the net, and its AI also analyzed my play and offered voice feedback to help me quash my bad habits and improve my game.
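Omron hasn't published how Forpheus works under the hood, but the core loop any ball-returning robot needs is easy to picture: fuse camera observations into a trajectory, predict where the ball will cross its side of the table, and move the paddle there. Here's a rough Python sketch with made-up numbers that ignores the spin and air drag a real system has to model:

import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # m/s^2

def estimate_velocity(p_prev, p_curr, dt):
    # Finite-difference velocity from two fused camera observations.
    return (p_curr - p_prev) / dt

def predict_intercept(p, v, robot_plane_y, steps=200, dt=0.005):
    # Step a simple ballistic model forward until the ball crosses
    # the robot's side of the table (a plane at y = robot_plane_y).
    for _ in range(steps):
        v = v + GRAVITY * dt
        p = p + v * dt
        if p[1] >= robot_plane_y:
            return p  # predicted paddle target
    return None  # ball never reaches the plane in this horizon

# Two (hypothetical) ball positions seen 20 ms apart, in meters.
p0 = np.array([0.10, -1.2, 0.40])
p1 = np.array([0.11, -1.0, 0.41])
v = estimate_velocity(p0, p1, dt=0.02)
print("move paddle to:", predict_intercept(p1, v, robot_plane_y=1.2))

The coaching side, analyzing a player's strokes over time, sits on top of a loop like this, which is where the five cameras and the paddle sensors earn their keep.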

The demonstration wasn’t just about fun and games. The robot’s ability to respond to my movements illustrates the increasingly complex and rewarding ways robots are starting to interact with us. Thanks to a bunch of technologies that allow robots to understand us and our needs in new ways, they’re poised to play a much bigger role in our lives.

That nuance is key to transforming existing robots — mostly large automatons in factories or gimmicks at shopping malls — into legitimate alternatives for companionship or coaching. We’re already starting to see robots appear in retail, medical and educational environments, and they’ll also start to make their way into our homes. Research firm Frost & Sullivan estimates that personal robots will be a $19 billion market by 2020, suggesting that we’ll come across many robots down the road.

“In the future, robots will work in collaboration with human beings in the same environment, with enhanced human-like capabilities to carry out daily activities,” said Vijay Narayanan, analyst at Frost & Sullivan, in a statement.

As with Forpheus, this might mean they’re teaching you to get better at sports. Or, like Samsung’s robots, they could be monitoring your health or collaborating with you to provide physical assistance. Perhaps they’ll come in the form of a mobile vessel like the Alexa-powered Temi. Or they may leave you feeling less lonely and more open to the possibilities of creating loving relationships, like Groove X’s adorable, blinky-eyed Lovot.

Samsung’s wearable GEMS robots work with the person they’re clinging to.

James Martin/CNET

Over the past few years, we’ve grown used to interacting with voice assistants such as Alexa, Google Assistant and Siri, but voice is just one interface we’ll use to communicate with robots. They’ll also be using other cues to understand us — here are some I’ve experienced in the flesh.

Touching, feeling, freewheeling

One way we can distinguish actual robots from other smart objects or devices is through their physical interaction with the world or people around them — particularly when they possess autonomous navigation skills. For Temi, the ability to move around is the one thing that sets it apart from a static smart home hub like the Amazon Echo Show.

At the other end of the spectrum are robots that move only in collaboration with humans, rather than autonomously around us. Samsung’s wearable GEMS robots, for example, are designed to equip people who have mobility issues with bionic powers. These robotic exoskeletons literally take the weight of physical labor off the wearer by enhancing the capabilities of ankles, knees or hips.

But robots don’t have to be wearable to respond to human touch. Look at a droid like Lovot, Groove X’s loneliness-curing companion robot, and you can start to see how robots can react physically to the actions of humans. The touch sensors in Lovot let the robot understand when it’s being picked up (prompting it to tuck its wheels away), when it’s being tickled (making it laugh) or when it’s being hugged (making it emit heat, and encouraging it to go to sleep).
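Going just by the behaviors Groove X demonstrated, you can picture that reactivity as a simple mapping from touch events to responses. This is a hypothetical Python sketch, not Groove X's actual software:

from enum import Enum, auto

class TouchEvent(Enum):
    PICKED_UP = auto()
    TICKLED = auto()
    HUGGED = auto()

def react(event: TouchEvent) -> list[str]:
    # Map a detected touch event to the Lovot-style behaviors
    # described above; the behavior names are invented.
    responses = {
        TouchEvent.PICKED_UP: ["tuck_wheels"],
        TouchEvent.TICKLED: ["laugh"],
        TouchEvent.HUGGED: ["emit_heat", "drift_to_sleep"],
    }
    return responses.get(event, [])

print(react(TouchEvent.HUGGED))  # ['emit_heat', 'drift_to_sleep']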


I spy with my robot eye

My reunion with SoftBank’s Pepper — a robot I’ve met on many previous occasions, including through a game of Cards Against Humanity — showed me that once a robot automatically knows and understands who I am, we can take our interactions to a whole new level.

In this particular demo, I dabbled in some online shopping through an iPad, but picked up my purchase in store. As I went through the payment process, I was asked to submit a photo of my face using the iPad camera.

As a result, Pepper greeted me by name when I visited SoftBank’s makeshift store in a suite at the Venetian Hotel. It then brought up my order and asked me if I’d like to collect it. When I said yes, a human appeared beside Pepper to present me with my shirt.

It was a seamless experience that spared me the usual annoyances of ordering online and collecting in store: standing in line at a desk, finding and relaying an order number, and waiting while the person who took my details rifles through a stockroom and reappears with my item.
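Under the hood, a flow like this mostly needs to associate a face signature captured at checkout with an order, then match against it in store. Here's a minimal sketch, with toy vectors standing in for whatever face representation SoftBank actually uses:

import numpy as np

enrolled = []  # (embedding, order) pairs registered at checkout

def enroll(embedding, order):
    # Called when the shopper submits a photo during online checkout.
    enrolled.append((embedding / np.linalg.norm(embedding), order))

def identify(embedding, threshold=0.8):
    # Called when a face appears in front of the in-store robot.
    # Returns the best-matching order if cosine similarity clears
    # the threshold, else None.
    q = embedding / np.linalg.norm(embedding)
    best = max(enrolled, key=lambda e: float(q @ e[0]), default=None)
    if best is not None and float(q @ best[0]) >= threshold:
        return best[1]
    return None

enroll(np.array([0.9, 0.1, 0.2]), {"name": "Katie", "item": "shirt"})
match = identify(np.array([0.88, 0.12, 0.21]))
if match:
    print(f"Hello {match['name']}! Ready to collect your {match['item']}?")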

Always a pleasure doing business with Pepper.

Katie Collins/CNET

Pepper also gained a set of new powers in 2018 thanks to Affectiva’s Emotion AI, which uses vocal and visual cues to interpret human emotion. In the long run, the ever-improving ability to perceive and interpret emotions should allow for more meaningful interactions between humans and companion or customer service robots.
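Affectiva's models are proprietary, so take this as a toy illustration of the general idea: blend per-emotion confidences from the face and the voice into one estimate, then act on the strongest. The weights and labels here are made up:

def fuse_emotion(visual, vocal, w_visual=0.6):
    # Weighted blend of per-modality emotion scores (hypothetical).
    emotions = set(visual) | set(vocal)
    scores = {e: w_visual * visual.get(e, 0.0)
                 + (1 - w_visual) * vocal.get(e, 0.0)
              for e in emotions}
    return max(scores, key=scores.get)

print(fuse_emotion({"joy": 0.7, "surprise": 0.2},
                   {"joy": 0.4, "anger": 0.3}))  # joy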

This could be especially useful for robots like Lovot, which are intended as companions. Sitting on top of Lovot’s head is a facial recognition camera that can distinguish up to 1,000 faces, helping the robot to understand who you are and to establish a relationship with you.

Physical contact is an important factor in encouraging humans to bond with robots, Kaname Hayashi, creator of Lovot, told me on the show floor at CES. But it’s not the only key element to this process. Eye contact and the robot’s ability to recognize you are also important, he said.

Merging vision and voice

Then there’s Gerard, who can read eye contact and gestures, understand and navigate its immediate environment, and respond to voice commands. Gerard, it turns out, is more capable and coordinated than most humans after they’ve spent a couple of hours in the pub.

The semi-humanoid robot (wearing a bowler hat and bowtie) is the product of Synapse, owned by research firm Cambridge Consultants. Gerard, having explored and mapped the vicinity, understood where it was positioned in relation to the surrounding furniture. When I looked directly at it, it noticed my eye contact, locked onto me, and watched and listened as I gave it instructions.

In this case, the order was: “Turn on the lamp.” Gerard noticed exactly which lamp I was pointing to and followed my instructions precisely. Being both seen and heard by a robot made it feel less like an awkward novelty and more like a genuinely useful interaction.
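Cambridge Consultants hasn't said how Gerard fuses these signals, but the heart of it, using a pointing direction to pick out which object a spoken command refers to, gated on eye contact rather than a wake word, can be sketched in a few lines. All of the names and geometry below are illustrative:

import numpy as np

# Lamps the robot has mapped in the room (hypothetical positions, meters).
lamps = {
    "desk_lamp": np.array([2.0, 1.0, 0.8]),
    "floor_lamp": np.array([-1.5, 2.0, 1.4]),
}

def resolve_pointing(hand, direction, objects):
    # Pick the object closest to the ray defined by the user's arm.
    d = direction / np.linalg.norm(direction)
    def angle_to(pos):
        to_obj = pos - hand
        return np.arccos(np.clip((to_obj @ d) / np.linalg.norm(to_obj),
                                 -1.0, 1.0))
    return min(objects, key=lambda name: angle_to(objects[name]))

def handle_command(eye_contact, utterance, hand, direction):
    # Only act when the user is looking at the robot: eye contact
    # stands in for the usual trigger word.
    if not eye_contact:
        return None
    if "lamp" in utterance.lower():
        return f"turn_on({resolve_pointing(hand, direction, lamps)})"
    return None

print(handle_command(True, "Turn on the lamp",
                     hand=np.array([0.0, 0.0, 1.2]),
                     direction=np.array([1.0, 0.5, -0.1])))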

Gerard knows which light you mean.

Synapse

This time around Gerard was stationary, but in the future, you could potentially point to an object and ask the robot to fetch it for you — truly fulfilling the vision of a robot butler. This tech combo also allows you to ditch the trigger words (“OK Google,” “Hey Siri”) you usually need to “wake” the software.

Thanks to its facial recognition skills, the robot knows when you’re looking directly at it and can recognize you, enabling much more fluid interactions that are reminiscent of the way we interact with other humans. It’s an example of robots adapting to existing human behaviors, rather than asking humans to develop new behaviors to incorporate robots into our lives.

Robot-to-robot interactions

As well as collaborating with humans, robots are increasingly learning how to interact with each other in order to serve us better. Often these collaborations take place between robots of the same type, which work across a common platform.

At Incheon Airport in South Korea, when one of the 14 robots on site sees a crowd forming on one side of the airport, the other robots know to come and guide people to the emptier side.

“They look like individual robots, but they’re all connected by a common platform,” said LG CTO I.P. Park in an interview at CES. “They learn from each other, share experiences.”
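One simple way to picture that common platform is a shared channel robots publish observations to, with each robot deciding how to respond. A hypothetical sketch, not LG's actual architecture:

from dataclasses import dataclass

@dataclass
class CrowdReport:
    zone: str
    density: float  # people per square meter (invented metric)

class SharedPlatform:
    # Toy message bus standing in for whatever the robots share.
    def __init__(self):
        self.subscribers = []
    def subscribe(self, robot):
        self.subscribers.append(robot)
    def publish(self, report, sender):
        for robot in self.subscribers:
            if robot is not sender:
                robot.on_crowd_report(report)

class GuideRobot:
    def __init__(self, name, platform):
        self.name = name
        platform.subscribe(self)
    def on_crowd_report(self, report):
        if report.density > 1.5:
            print(f"{self.name}: guiding travelers away from {report.zone}")

platform = SharedPlatform()
robots = [GuideRobot(f"robot_{i}", platform) for i in range(3)]
platform.publish(CrowdReport(zone="gate_A", density=2.0), sender=robots[0])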

SoftBank is taking things a step further by encouraging different types of robots made by different manufacturers to work seamlessly with one another.

The second part of my SoftBank demo at CES involved Pepper providing me with the option to buy more items when I collected my order in store. Just around the corner, a second robot — Tally of Simbe Robotics — was scanning store shelves to count stock and share information back to Pepper about what items were overstocked and needed to be offloaded to shoppers like me.
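Neither company spelled out the interface, but you can picture the handoff as a small structured message passing from the shelf-scanning robot to the customer-facing one. Everything here, schema included, is hypothetical:

import json

# A report Tally might emit after a shelf scan (invented schema).
shelf_scan = {
    "robot": "tally-01",
    "overstocked": [
        {"sku": "TSHIRT-M-BLK", "count": 42, "aisle": 3},
        {"sku": "CAP-RED", "count": 17, "aisle": 1},
    ],
}

def upsell_suggestions(scan, limit=1):
    # Turn overstock data into the offers Pepper makes to shoppers,
    # most-overstocked items first.
    items = sorted(scan["overstocked"], key=lambda i: i["count"],
                   reverse=True)
    return [f"How about a {i['sku']} from aisle {i['aisle']}?"
            for i in items[:limit]]

# Pepper receives the report, say as JSON over the store's network.
received = json.loads(json.dumps(shelf_scan))
for line in upsell_suggestions(received):
    print(line)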

Where we need to go (together)

As exciting as all these developments are, very few of them are ready to transform our relationships just yet.

“To enter the mainstream, robotics will have to plug the gaps in technology,” said Narayanan. “Scientists are still working on improving the speech interaction, navigation capabilities, and emotion detection capabilities of the current generation of robots.”

This is slowly starting to happen. At CES this year, I saw a whole bunch of examples of these technologies embedded within robots — although some were at more advanced stages than others. With a massive proliferation in the number of robots, and in the contexts in which they’re used, the current focus seems to be on smoothing the interactions between humans and robots, as well as the way robots of different kinds communicate with one another.

“Everything that we do is about the man-machine interface,” said Nigel Blakeway, CEO of Omron. “As we see the technology developing, and the needs of society developing, we see that there’s a very big need for man and machine to work in harmony.”

So while the tech is steadily improving, the only real question left for humans is: Are we prepared to befriend and work alongside our new robot companions and pingpong coaches? Because they’re coming for us, whether we’re ready or not.
