Fluctuating productivity amongst the robots
For a processor (robot or human), productivity is best measured as a ratio of output to input: how much work did we get out for the time we put in? For this to make sense, we generally convert time into “capacity to do work”, based on some idea of how much work could be done in a given time.
So if Person A completes 75 tasks in a day and had the capacity to complete 100, their productivity was 75%. Similarly, if Robot B completes 500 tasks in a day and had the capacity to do 1,000, their productivity would be 50%.
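The calculation is simple enough to sketch in a few lines of Python (the figures are the illustrative ones from the examples above, not real data):

```python
def productivity(completed_tasks: int, capacity: int) -> float:
    """Productivity as the ratio of output to capacity (0.0 to 1.0)."""
    if capacity <= 0:
        raise ValueError("capacity must be positive")
    return completed_tasks / capacity

print(f"{productivity(75, 100):.0%}")    # Person A: prints 75%
print(f"{productivity(500, 1000):.0%}")  # Robot B: prints 50%
```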
But why would Robot B only do 500 tasks? They wouldn’t dawdle because they didn’t like their boss. They wouldn’t spend hours on social media, and they would surely only be allocated tasks that they were 100% capable of processing. We wouldn’t give the robot a motivational pep talk, or offer it some incentive to “work harder” because the answer obviously lies in the system, not in the robot.
Maybe Robot B could only process 500 tasks because there were only 500 available to be done. Maybe the core system was running incredibly slowly that day, or there was so much network traffic that latency was affecting cycle times. Maybe someone changed a port on a firewall and the robot needed to be reset. Or there were hundreds of exceptions and the robot had to try them multiple times before rejecting them.
As an aside: isn’t it strange that if a person’s productivity is 50% we assume idleness, a propensity to waste time on social media, or a lack of skill, but if it is a robot we quickly understand that the workflow is the problem?
Data-focused technologies such as Process Forensics, and the digital operations management or WFO technologies that seek to improve performance through URL logging or other screen-monitoring techniques, miss the point entirely: people’s productivity is influenced far more by the flow of work through the system than by their willingness to work or their skill level (unless you work really hard at alienating and underinvesting in your workforce). Workforce monitoring technologies seek to intimidate people into working harder, but you can’t intimidate people into having more work available to do.
Returning to our main line of argument: a robot could have 50% (or lower) productivity if the system were out of balance. Fluctuating demand, bottlenecks in the workflow, and variations in work complexity will all drive variations in productivity – as with people, so it is with robots.
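This point can be illustrated with a minimal sketch: a robot’s productivity is capped by the work actually available to it, however willing and capable it is. The capacity and daily demand figures below are illustrative assumptions, not real data.

```python
CAPACITY = 1000  # tasks the robot could process in a day

# Fluctuating daily arrivals of work (illustrative numbers)
daily_demand = [500, 1200, 300, 1000, 700]

for day, demand in enumerate(daily_demand, start=1):
    completed = min(demand, CAPACITY)  # the robot can't do work that isn't there
    print(f"Day {day}: completed {completed}, productivity {completed / CAPACITY:.0%}")
```

Even with a perfectly reliable robot, productivity swings between 30% and 100% here purely because the flow of work swings.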
So, what can be done?
Finding the right scheduling technologies
While easy, real-time data capture is essential to support organisational learning and good, fair, informed decision-making, some Digital Operations Management and WFO technologies appear to place too great an emphasis on capturing data about people and then using it to try to punish them into performing better. If you doubt this, just look at the way some technologies promise to improve productivity by spying on employees’ use of social media during work time.
These technologies would clearly not help improve the productivity of RPA resources and are, in fact, worse than useless at improving the productivity of human resources.
Workforce optimisation should actually be about creating a smooth flow of work and balancing work demand against resource availability. The Digital Operations Management and WFO technologies that succeed in managing blended human/RPA environments will have the following features:
- Support decentralisation and the empowerment of team leaders. Agility and responsiveness come from putting control as close to the customer as possible, and team leaders are best placed to optimise their team’s performance.
- Promote recognition that the system surrounding the flow of work is a major influence on performance, leading to data being used to measure and improve the system rather than monitor and control the individual.
- Place emphasis on forward planning and scheduling to improve the balance of work demand and resource availability.
- Use data primarily to create opportunities for learning and improving the scheduling process – improving flow, eliminating failure demand (the demand that results from poor service or poor quality) and targeting positive, developmental interventions.
- Enable the explicit balancing of trade-offs between efficiency and service levels so that customer expectations are always met at the optimised cost.
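The forward-planning point above can be sketched very simply: compare forecast demand with an assumed standard capacity per person to see how much resource each day needs. The day names, forecast figures and `TASKS_PER_PERSON_PER_DAY` value are all illustrative assumptions.

```python
import math

TASKS_PER_PERSON_PER_DAY = 100  # assumed standard daily capacity per person

# Illustrative forecast of task arrivals by day
forecast = {"Mon": 750, "Tue": 1100, "Wed": 400, "Thu": 900, "Fri": 600}

for day, demand in forecast.items():
    staff_needed = math.ceil(demand / TASKS_PER_PERSON_PER_DAY)
    print(f"{day}: {demand} tasks -> schedule {staff_needed} people")
```

Even a sketch this crude makes the imbalance visible in advance – Tuesday needs nearly three times the resource of Wednesday – which is exactly the kind of information a scheduler needs before the day starts, not after.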
Neil Bentley has been helping organisations to improve their front-line operating performance for over 20 years. Originally qualified in Psychology, he went on to work at Lucas Industries in the 1980s, gaining experience in manufacturing production management, before focusing on financial services and the public sector, first with PA Consulting Group and then as a partner with specialist consultants OCP.
He launched ActiveOps with fellow OCP partner Richard Jeffery in 2005. Neil brings with him an unparalleled understanding of the mix of the human and the technical aspects of performance improvement.