With automation and artificial intelligence fundamentally changing the way we do business, what moral responsibility do bosses have to support employees in this brave new technological era? We asked leading thinkers for their views.
Lord Clement-Jones, Chair of the House of Lords Select Committee on Artificial Intelligence
It’s important that technology be our servant, not our master. Our job on the Lords Artificial Intelligence (AI) Committee is to understand what the governance mechanisms should be, without falling into a negative narrative about the perceived threat of AI and automation.
The Committee has come up with a code of ethics, similar to that of the medical profession, which states that AI should be fair, accessible and used for human good. The next stage is working out how organisations adhere to these ethical principles, perhaps through ethical advisory boards and AI ‘auditors’, ethical design or transparency within algorithms.
Business chiefs have a moral obligation to the workforce and, if they neglect it, they’ll lose public trust. At the government level, we need to promote ethical frameworks and regulate where appropriate.
“It’s great that people will have more choice as to how they work, but they shouldn’t have conditions imposed on them through technology.” Lord Clement-Jones
As the nature of jobs changes, companies that plan, adapt and provide opportunities to develop new skills will be rewarded with an engaged workforce. But they need to do it now. Lessons from history show that new technology can transform work dramatically in just a few years, so we can’t afford to wait.
It’s great that people will have more choice as to how they work, but they shouldn’t have conditions imposed on them through technology.
For companies transitioning to new technologies, I would advise involving all departments, particularly HR, and not just the FD and CTO. Adoption shouldn’t be down to techies alone; the process needs to engage people across the organisation.
Senior lecturer in governance of advanced and emerging technologies, University of Derby
Employers do have an ethical responsibility in the area of automation and AI. In fact, it’s enshrined in the UK Companies Act 2006, which states that a director must act in the way they consider most likely to promote the success of the company for the benefit of its members as a whole.
The Act lists six factors strongly related to ethical behaviour. But these principles can be ignored: in the Cambridge Analytica fiasco of 2018, ethical behaviour took a back seat to revenue and profit generation, and the company collapsed as a result.
The impact of AI on the workplace is itself an open question. During a single week last November, five leading companies admitted defeat in developing much-hyped technologies and announced they would be changing direction.
These included Amazon failing to get machine learning to work for recruitment processing, Uber and Lyft scaling back their own autonomous vehicle systems, and Google suggesting that fully autonomous cars will not be available to operate in all conditions. These are just the tip of the iceberg. We hear every day of technologies that have been oversold and are not actually delivering.
"We hear every day of technologies that have been oversold and are not actually delivering." Richard Self
Employment Law Partner at Taylor Vinters
The key is to get humans and machines working together as part of a blended workforce, pooling their collective capabilities with each focusing on what they are good at.
When considering the potential benefits of AI and automation, business leaders should consider two important things. First, trust, transparency and ethics are priority leadership issues for organisations embracing new technologies. If mishandled or misunderstood, AI could easily replicate, and potentially exacerbate, human biases or make decisions that have unintended consequences.
Second, the introduction of technology into the workplace is likely to be much smoother when employees understand why it is being done and have a genuine opportunity to air concerns or make suggestions. Innovation itself should not be seen as a threat, but something to be celebrated. Early and meaningful consultation within organisations can bring significant benefits.
“Innovation itself should not be seen as a threat, but something to be celebrated.” Dominic Holmes
The ethical challenges of using and deploying automation and artificial intelligence are key for business leaders to get to grips with. Leading thinkers give us more of their views.
Associate director, business consulting
The fourth industrial revolution has been a long, gradual transition. From enterprise resource planning systems to warehousing and factory machinery, robots have taken over jobs once done by people. This process is speeding up and the ethical question is how we manage the change.
Statistics show that technology doesn’t cause long-term mass unemployment, so it's unlikely robots will do it all for us in future. The language has changed in the last few years and it's now about augmenting our capabilities, with staff working alongside AI and robotics.
The big question is whether people have the right skills and experience to adapt. It’s similar to any change project where you ask people to work differently: can they do it and, if not, can you train them? If jobs are set to change, you have to ensure people know what’s in it for them. If you strip out the bits that are satisfying or make people feel valued, they will start to feel like an interface between machines.
We advise companies to start with values and purpose. If you have a customer service ethos, how does engagement with a touchscreen instead of a person affect that? If the purpose is to reduce cost, or move people into value-added roles, how will you go about it? It all comes back to governance and how you approach the transition. It’s essential to consider your stakeholders and ensure your actions are aligned to your organisational values.
"If you have a customer service ethos, how does engagement with a touchscreen instead of a person affect that? It all comes back to governance and how you approach the transition.” Neal Dempsey
Futurist and industry adviser
Balancing profit with purpose will continue to go a long way in the technological age. Certainly, this will help business leaders assemble a component essential to success in this area: high-calibre workforces. Multiple studies reveal that young talent is drawn to businesses with a social conscience.
When we talk of talent, we talk of skills, and I’m constantly asked how workforces can stay future-proof in an AI-heavy gig economy. AI may threaten some roles, but fewer than 5% of jobs can be fully automated, according to the McKinsey Global Institute.
In most other roles, only around 30% of tasks can be taken on by an AI-powered algorithm. That would in turn boost the productivity of the workforce, freeing up time for us to focus on key decision-making and building relationships.
Employees can focus on more strategic, cognitively challenging tasks, but they need an open mindset to embrace AI. Since one in three skills that were core in 2015 will no longer be considered key by 2020, workers need to enhance their digital skill sets and literacy for an AI world.
Digital sectors are creating jobs nearly three times faster than the wider economy. While this means more demand for data scientists, AI researchers and mixed-reality experts, inherently human traits, such as intuitive decision-making and empathy, will remain on the list of desired talents.
C-suite leaders with vision and sound ethical standards will help their employees navigate the automation and digital transformation process effectively and capably.
“C-suite leaders with vision and sound ethical standards will help their employees navigate the automation and digital transformation process.” Shivvy Jervis