Governing the responsible use of AI: the important role Internal Comms has to play
Do you know how many people in your organisation are using generative AI to help them do their jobs? According to one survey, almost half of professionals use it at work, but nearly 70% of them do so without their boss’s knowledge. Some companies judge the risk of staff inadvertently leaking confidential data to be so high that they’ve banned their workforces from using GenAI models like ChatGPT. Yet, at the same time, two thirds of IT leaders say they’ll have integrated some form of generative AI into their work processes by the end of this year. Like it or not, AI in the workplace is here to stay. And the general consensus seems to be to lean in, accept and embrace it – or be left behind. But who’s making sure it does no harm? Who’s governing its use? And who’s ensuring staff are trained to incorporate it into their jobs ethically and safely?
Here in the UK, the government has said regulation will come but, for now, any company wondering how to get this right has to wade through a patchwork of guidelines from a host of different regulators. Businesses are figuring out their own policies, sizing up the pitfalls and opportunities and, we hope, being mindful of ethical concerns. The natural leads on AI policy in most businesses are likely to be the leadership team, HR and IT, but we believe Internal Comms also has a significant role to play. So, here are our suggestions on what you should be doing if you want to make sure your organisation embraces AI the right way.
· Find out who’s been tasked with developing an AI policy in your business – and if it’s no-one’s role, set up your own cross-departmental task force. Lots of people are worried about what AI will do to their jobs, so for an AI policy to be accepted and adhered to, staff buy-in is essential. And who better than Internal Comms to make sure all stakeholders are represented in early discussions, and that as many voices as possible are heard during planning and implementation? If you find yourself leading the effort, the CIPD has some useful advice on what to consider when developing an AI policy.
· Educate yourself. Knowing how to use generative AI platforms is fast becoming a key professional skill, which is why, as end users, we should all understand the different offerings out there, particularly their advantages and limitations. We’re each responsible for what we create and for its potential impact too – for example, if what you put into ChatGPT is viewed by your company as a data breach, you could lose your job. So make sure you experiment safely, with minimal risk to colleagues or proprietary data. Educate yourself on the bigger picture too, so that you understand not just the balance between fostering innovation and managing risk, but also the ethical and philosophical concerns. The IoIC’s white paper is a good starting point for this.
· Educate the board and other departments. It’s important that all board members have a proper understanding of AI and its ethical implications. Set up training sessions or workshops to familiarise them with essential AI concepts, such as algorithmic bias, privacy concerns and the potential impact on employment.
· Develop your own AI strategy for internal communication – and help other departments do the same. The implementation of AI is too important to be left to IT or senior leadership alone. Ideally, every department should hold brainstorming and planning sessions to agree the best way to integrate AI into their part of the business.
· Share your new AI policy broadly across the organisation and make sure it’s accompanied by suitable training and education. This could include, for example, hosting a Q&A with representatives from your IT, legal, marketing and HR teams to help employees understand the opportunities and challenges ahead. Use real-world examples that have been in the media, such as court cases, as case studies to make the potential implications easier for your audience to grasp.
With the use of generative AI still in its infancy, it’s vital we get the right processes in place now. The skillset, contacts and perspective of Internal Comms mean we’re uniquely placed to convene and facilitate the conversations that need to be had – not just to make sure the employee voice is heard, but so that organisations can be confident that when they bring generative AI into the workplace, they’re doing so in a way that is safe, ethical and responsible.