Unemployment is common among people with disabilities: only 19% of working-age individuals with a disability are able to participate in the workforce. Many workplaces still lack the accommodations needed to be genuinely inclusive.
When it comes to increasing accessibility, the first things that come to mind are often the bricks-and-mortar measures, such as wheelchair ramps, wider hallways and special lifts; however, accessibility is about much more than this. With the power of innovation, people with disabilities can now work like anyone else.
A pilot project in Japan is trialling how a human-operated avatar robot performs work duties and activities flexibly and freely in a cafe setting.
Through communication control technology, people who have disabilities or are limited by illness can operate an avatar robot remotely, enabling real-time conversation, customer service and interaction, as well as flexible, precise movement within the environment.
Conventional avatar robots typically suffer a delay between when video is captured and when it appears on the operator’s remote monitor. These delays are usually caused by the network’s ‘best effort’ routing, packet loss and high latency. The time lag makes it difficult to perform detailed movements, such as pivoting around obstacles, and leaves operators stressed because they cannot control the robot properly. Earlier avatar prototypes had robots moving along a predetermined path in the cafe, which does not reflect intuitive, natural activity.
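To make these delay sources concrete, here is a minimal sketch (not NTT’s actual pipeline; all stage values and loss rates are assumptions for illustration) that models event-to-eye delay as the sum of capture, encoding, network transit and display stages, where a best-effort network adds variable queuing jitter and loss-driven retransmissions:

```python
import random

random.seed(0)

# Illustrative pipeline stages, in milliseconds (assumed values).
CAPTURE_MS = 5            # camera capture/readout
ENCODE_MS = 10            # video encoding
DECODE_DISPLAY_MS = 10    # decoding plus monitor refresh

def best_effort_network_ms(loss_rate=0.05, base_rtt=40):
    """One frame's network delay over a best-effort path: one-way propagation,
    variable queuing jitter, plus a retransmission round trip on packet loss."""
    jitter = random.uniform(0, 60)                       # queuing varies hop by hop
    retransmit = base_rtt if random.random() < loss_rate else 0
    return base_rtt / 2 + jitter + retransmit

def event_to_eye_ms(network_ms):
    """Total camera-to-monitor delay for one frame."""
    return CAPTURE_MS + ENCODE_MS + network_ms + DECODE_DISPLAY_MS

samples = [event_to_eye_ms(best_effort_network_ms()) for _ in range(1000)]
print(f"best-effort mean delay: {sum(samples) / len(samples):.0f} ms")

# A managed path with controlled transmission removes queuing jitter and
# loss-driven retransmits, leaving a small, near-constant transit delay.
print(f"managed-path delay:     {event_to_eye_ms(5):.0f} ms")  # 30 ms
```

The point of the sketch is that the variable terms (jitter and retransmission) dominate the best-effort path, which is why controlling transmission, rather than just raising bandwidth, reduces the lag the operator perceives.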
To solve this problem and leverage the IOWN concept, NTT is rolling out a network-based control technology that enables low-delay optical transmission of multiple video streams by precisely controlling transmission according to the status and characteristics of each avatar robot. By reducing video delay, this technology improves the operability of avatar robots, enables smooth movement even in narrow passages such as between desks, and reduces the stress that pilots experience when operating the robot.
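Per-robot transmission control could, for example, weight each robot’s video stream by its current status, giving a navigating robot the freshest video. The sketch below is purely hypothetical; the statuses, weights and `allocate_bandwidth` helper are assumptions for illustration, not NTT’s API:

```python
from dataclasses import dataclass

@dataclass
class RobotStream:
    """One avatar robot's video stream and its current activity status."""
    name: str
    status: str  # "moving", "serving", or "idle" (assumed categories)

# A robot that is navigating needs the lowest video delay, so its stream
# gets the largest share of the link (weights are illustrative).
PRIORITY = {"moving": 3.0, "serving": 2.0, "idle": 1.0}

def allocate_bandwidth(streams, link_mbps):
    """Split a fixed link budget among streams in proportion to priority."""
    total = sum(PRIORITY[s.status] for s in streams)
    return {s.name: link_mbps * PRIORITY[s.status] / total for s in streams}

robots = [
    RobotStream("robot-a", "moving"),
    RobotStream("robot-b", "serving"),
    RobotStream("robot-c", "idle"),
]
for name, mbps in allocate_bandwidth(robots, 60.0).items():
    print(f"{name}: {mbps:.0f} Mbps")  # robot-a: 30, robot-b: 20, robot-c: 10
```

In a real deployment the controller would re-run such an allocation whenever a robot’s status changes, so a stream’s share tracks how delay-sensitive its pilot’s current task is.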
In a system that applies this technology, the video delay (event to eye) from capture by the camera mounted on the avatar robot to display on the monitor viewed by the pilot is reduced.
We compared the video delay when transmitting the video over the Internet with that of this demonstration network, and confirmed that the delay was reduced by approximately 400 ms, to roughly 1/20 of the Internet path’s delay.
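Taken together, the two reported figures imply the approximate absolute delays. Assuming the demonstration network’s delay is about 1/20 of the Internet path’s and the difference between them is about 400 ms:

```python
# Let d_internet be the Internet-path delay and d_demo = d_internet / 20.
# The reported reduction is d_internet - d_demo ≈ 400 ms, so
# d_internet * (1 - 1/20) = 400.
ratio = 1 / 20
reduction_ms = 400

d_internet = reduction_ms / (1 - ratio)
d_demo = d_internet * ratio

print(f"Internet path:         ~{d_internet:.0f} ms")  # ~421 ms
print(f"demonstration network: ~{d_demo:.0f} ms")      # ~21 ms
```

At roughly 21 ms event to eye, the video lag falls near a single frame at typical frame rates, which is consistent with the pilots’ reported ease of making fine movements.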
We expect this pilot project to expand to more complex interactions and activities, such as museum guides, retail clerks and information booth consultants. We will continue developing ways for avatar robots to help their pilots take an active and positive place in society.