
AI - Life after Implementation

In this paper we explore life for the leadership team after implementation: how day-to-day leadership may be affected by the emergence of AI in its various forms. In doing so we explore two key themes:

How the core activities ascribed to leaders in our existing pre-AI world may change, specifically their role in two critical activities: decision making and overseeing the human-centric elements of organisational operation.

Leaders spend much of their time making decisions – typically their key role is making the biggest and most significant decisions that determine the overall trajectory of the organisation (i.e., strategic planning). Added to this will be an ongoing stream of more mundane, routine decisions that ensure the smooth running and development of the organisation. Indeed, a leader’s ongoing promotion through the organisation to a senior position is likely to be evidenced by a stream of good, well-informed decisions with positive implications for the organisation.

Additionally, we see leaders charged with developing the human capital of organisations: acting as role models for organisational behaviour, building a supportive culture, overseeing the often complex political and social context, and ensuring the organisation develops its human-based capabilities and skills in the right way, equipping it to operate in the increasingly dynamic markets and industries we encounter today.

Our second key theme will be:

How these two sets of activities are likely to evolve in terms of the relationship between leaders and AI, specifically if and how leaders are enhanced by the presence of AI, or as some commentators are already suggesting, they are replaced.

 

The AI enhancement perspective

 

The initial impetus for the deployment of AI in most organisations is likely to be focused firmly on the enhancement agenda. Few if any organisations are likely, as a first step, to leverage the technology to remove huge swathes of the workforce. The ‘optics’ won’t be good, and the resulting climate of fear and uncertainty is unlikely to be the best starting point for leaders charged with creating the right organisational environment for the widespread development and deployment of AI in their organisations.

So, what does this ‘enhancement’ agenda mean for leaders? Most leaders are already likely to be interacting regularly with AI to improve their decision making, though not necessarily in a business context. Many of the mundane tasks we undertake every day, from searching the internet and finding a driving route to choosing what to do in our downtime, are already AI enabled to some extent. So it is not difficult to imagine such applications being used in a business setting.

Leaders can already ask ChatGPT (or Microsoft Copilot etc.) for options and advice on how to address a problem or how to improve a presentation. Of course, such applications, as they exist today, may be of relatively limited value. But we are already seeing more sophisticated, business-focused applications emerging, apps like BloombergGPT, trained on highly valuable proprietary data, to assist in business-related decision making.

Such apps are likely to proliferate, focused on different sectors (e.g. PharmaGPT) and business functions (e.g. MarketingGPT). We won’t have to wait long for a LeadershipGPT. As a next step, some of the more mundane, routine decision making of leaders may be the first to be automated by AI. Reviewing operational data and actioning routine, well-established decision-making protocols can now be handed over to increasingly ‘intelligent’ software as an extension of the expert systems leaders already utilise. Even in human relations, an area that most leaders may feel will always be the province of human actors, we are already seeing some interesting developments.

The oversight and development of more junior management, staff and workers is already being enhanced by applications like Amber, with the prospect of future applications that can not only collect and analyse real-time, in-depth data about the mood and disposition of a workforce (with data either provided by the staff themselves through surveys, or perhaps more ominously collected from the ongoing interactions of staff with various digital media, e.g. webcam-based eye tracking in virtual meetings), but crucially, can also take action.

One of the biggest challenges for most leaders has always been identifying and reaching ‘the little guy’, that individual way down in the organisation who, despite their lowly title, has a profound impact on the operation of staff around them, and ultimately is a key figure in the distributed company leadership. The challenge has typically been identifying and developing them, with sufficiently granular attention to make interactions meaningful.

In theory such interactions can now be automated with a previously impossible level of focus on the individual. The importance of a personalised approach, particularly in terms of incentives, has long been evidenced in academic literature, but the difficulty has been putting it into practice, particularly in very large organisations. In theory (a theory which is currently being tested) such new AI-enabled applications might be able to monitor individual levels of productivity, identify specific learning and development goals and, most importantly, offer detailed individual-level feedback and suggestions for improvement. Automated emails might be produced that reach each employee with a personalised message, based on their specific performance, offering tailored advice and guidance. And such apps can do this 24 hours a day, 365 days a year, to workforces of any size, however geographically diverse.
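To make this concrete, the kind of rule-based personalisation described above can be sketched in a few lines of Python. This is purely illustrative: the field names, thresholds and wording are all invented for the example, and a real system would draw on far richer data and an AI model rather than fixed rules.

```python
from dataclasses import dataclass

@dataclass
class EmployeeSnapshot:
    """Hypothetical monthly performance snapshot for one employee."""
    name: str
    completed_tasks: int
    target_tasks: int
    development_goal: str

def personalised_feedback(e: EmployeeSnapshot) -> str:
    """Compose a tailored feedback message from an employee's snapshot.

    The thresholds (100% and 80% of target) are arbitrary illustrations,
    not recommendations.
    """
    ratio = e.completed_tasks / e.target_tasks
    if ratio >= 1.0:
        opener = f"{e.name}, you exceeded this month's target"
    elif ratio >= 0.8:
        opener = f"{e.name}, you are close to this month's target"
    else:
        opener = f"{e.name}, this month's target is still some way off"
    return (f"{opener} ({e.completed_tasks}/{e.target_tasks} tasks). "
            f"Suggested next step: focus on '{e.development_goal}'.")

# Example: generate one personalised message per employee, at any scale.
print(personalised_feedback(EmployeeSnapshot("Sam", 12, 10, "delegation")))
```

A production system would replace the fixed rules with a language model prompted with each employee’s data, which is precisely where the transparency questions raised below begin.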

How will workers respond to such feedback? Will they know it’s generated and enacted by AI?

How leaders choose to present the involvement of AI in such interactions will be critical. Many leaders may naturally choose to publicly minimise the role of AI in such decision making, feeling that staff would prefer such decisions and interactions not to be left in the hands of algorithms. But interestingly, there is already some evidence that workers might in some cases actually prefer such decisions and interactions to be AI rather than human driven.

Research centred on what is being called the ‘theory of machine’ suggests that workers may feel the involvement of AI is beneficial, perceiving it as rational, fair, unbiased, always available and fast. Whatever the decision, whether to publicly minimise the role of AI in such interactions or to prominently announce it, honesty and transparency will be critical.

Of course, such activities already happen to some extent – it is the province of middle managers. They have tended to be the interface between the most senior management and those lower down the organisation, bringing that personal touch based on an intimate knowledge of the immediate staff and workers they deal with daily.

Perhaps it is this group, then, who have most to fear from the development of AI. Since the days of Robert Townsend and his seminal text, ‘Up the Organisation’, we have often viewed middle managers rather dimly: as potential obstacles to the free flow of information around organisations, and as an unnecessary and highly expensive tier of workers who, in the ‘flattened’ organisations of the future, should be the first to get flattened. It may be this tier, rather than those at the bottom of the organisation, who have most to fear from the initial widespread emergence of AI in the business environment. The opportunity for radical organisational restructuring, hugely significant cost reductions and the fulfilment of the promise of the flat organisation (e.g., the faster movement of information up and down the organisation) are tempting goals for the senior leadership team.

Though perhaps such leaders should be cautious, for as well as providing an interface between the top and bottom of the organisation, the middle management tier is typically also the training ground for the senior leaders of the future. Moving through the middle management strata provides a well-structured and established development path for those expected to take the top jobs in a few years’ time.

Destroying the middle management strata may thus risk not only losing pivotal organisational talent but also the development path and opportunities that such talent relies upon for their career development.

A further area in which we are likely to see the rapid emergence of AI will be those big decisions that determine the overall direction of the organisation – strategic planning.

Again, the focus is likely, at least initially, to be on enhancement. Strategic decisions tend to be the most difficult, most complex, and most impactful that leaders make. The process of strategic planning is beset with challenges: from collecting, managing, and analysing vast amounts of often contradictory data, to the natural human cognitive limits that make digesting and using these data resources difficult, to navigating the highly complex social and political organisational context in which strategic plans must be implemented.

Our senior leaders already have resources to help them in this challenging task, typically teams of lower-level functional and divisional leaders, or even dedicated strategic planning staff whose job it is to collect and analyse the data, develop strategic options, assess them, make recommendations and finally help implement the resulting plans.

Given AI’s ability to collect data, analyse and identify patterns and potentially to develop strategic options, it would seem that senior leaders might be able to do without these teams, exploiting the software’s ability to undertake the tasks far faster and more comprehensively than a human team might. Suitable software might also be able to draw on the emerging AI-based resources from external organisations such as the Bloomberg application mentioned earlier. We will be overflowing with relevant data, and most crucially, AI can help us interpret and make sense of it – the ultimate manifestation of the enhancement agenda. But given AI’s potential to collect and analyse data, generate options and potentially even select the ‘right’ one, we might ask what we need those most senior leaders for at all.

 

Will we quickly move from enhancement to replacement?

 

As most leaders will probably agree, there rarely is a single ‘right’ answer. Organisations are never homogeneous and given the complex range of (often competing) agendas evident in most, such decisions are typically compromises, responding as much to cultural, political and social pressures (typically difficult to quantify and express in purely data terms) as purely economic ones.

An AI-generated strategic plan may be beautiful in its conception and use of all relevant data, but almost impossible to implement given the subtleties of a specific organisational context. Equally, many leaders might agree that developing strategy is the relatively easy part; implementing it, getting the entire organisation to actively support it and ‘make it happen’, is the challenge.

So how will workers respond to being asked to implement plans developed in a ‘black box’ by algorithms, plans whose success or failure will ultimately determine the security of their jobs?

Trust in the efficacy of the AI-driven approach to strategy will take time to develop. For the near future, then, senior leaders are likely to have a pivotal and ongoing role in strategic planning: reality-checking AI-generated strategic options in the context of their specific organisation, and being the human face of the plan, fostering motivation and driving implementation.

But again, leaders are faced with a dilemma – to what extent do they publicly acknowledge the role of AI in developing top level strategy – those most important decisions, traditionally made by the most experienced leaders who are paid the highest salaries specifically to make them? As with the more routine decision making mentioned earlier, transparency and honesty with the workforce (and other stakeholders e.g. shareholders) will be crucial.

 

AI – a new type of follower

 

Finally, in the AI enabled firms of the future, who exactly will leaders be leading? As routine jobs are automated and middle management tiers are potentially reduced in size, who is left?

Is AI a new category of ‘follower’, and if so, what does this follower ‘need’ from its leader? It certainly does not need the motivation, emotional support, guidance and vision of its human counterparts. It doesn’t need to be told it’s ‘doing great’, does not need a sense of meaning or to feel part of a team, at least until it develops some level of emotional intelligence (and if we do get to this point, we will have far bigger problems to deal with).

An interesting perspective might be provided if we ask one of the current manifestations of AI, ChatGPT, what its moral and ethical values are. We tried this and received the following response:

“As an AI developed by OpenAI, I don’t have personal beliefs or ethics in the way humans do. My ethical values are aligned with the principles established by my developers at OpenAI.”

Without specific input then, AI, like any other piece of software, does not have its own ethical or moral values. All of that power, all of that potential, without a shred of ethical and moral conscience.

The significance of this issue has not gone unnoticed. Last year over 1,000 notable signatories, including some of the leading lights of the AI community, signed an open letter asking all AI labs to immediately pause development for six months, stopping the training and development of AI until ethical risks are clearer and some way of managing them is established.

Whilst such guidelines may emerge from regulators eventually, history suggests it is going to take far longer than six months for this to happen. Even six years sounds optimistic given the typical progress of legislators and regulators, particularly those attempting to establish standards across diverse territories and regions. Different national cultures have their own, historically conditioned, and potentially unique definitions of what good ethics and morals are. Regulators, particularly those operating across national boundaries, face significant challenges then.

Corporations on the other hand tend to be more focused and move faster. In the shorter term then, perhaps beyond any other activity in terms of its importance, it will be the function of organisational leadership to establish the moral and ethical context for the operation of AI. And whilst we would like to assume that corporations might dutifully develop appropriate ethical and moral guidelines we can all be proud of, and follow them, we still operate in a highly competitive business environment, where any opportunity to secure even temporary competitive advantage, cannot be ignored.

Anecdotal evidence suggests that the great ‘pause’ requested last March did not happen; companies kept on developing their AI apps in their quest for competitive advantage. From what data is collected, how it is collected, how it is managed, how it is used and processed, to how the ‘output’ is applied in the real world, all have the potential to raise significant moral and ethical concerns.

Training in basic ethical and moral behaviour is likely to be a minimum for all staff, particularly intensive for those directly involved in operating the AI infrastructure. Senior leaders need to be participants, not just initiators of programmes for others.

 

Conclusions

 

Over the last three papers we have begun to answer some of the questions most pertinent to leaders as they face the initial onslaught of AI. But many remain:

How will the recruitment of senior leadership need to change in the light of AI? What new competencies and skills will the leaders of tomorrow need? What personalities work well in the new environment? What backgrounds indicate strong potential to lead the AI-enabled organisations of the future? How will Boards need to change in their structure, governance, skill sets and interactions with stakeholders inside and outside the organisation? How will non-executive roles need to change? How might organisations best keep pace with and exploit the increasingly rapid technological evolution of AI? Will AI evolve in a controlled, relatively non-competitive environment, with organisations and regulators co-operating in managing its evolution, or in the ‘wild west’ of minimal regulation and maximum competition?

Ultimately, at least for the foreseeable future, leaders are faced with some intensely personal questions. What is your vision of AI? You must have one. Are you capable, sufficiently trained and farsighted enough to make these decisions? What are your ethical and moral values, and how will you ensure that the AI-enabled organisation you run manifests and adheres to them, even in the darkest recesses of the black boxes you have built?