
Five steps leaders can take to implement AI responsibly in your workplace

By Anna Ganley, chief executive of the Society of Authors – the UK’s largest union for writers, illustrators and literary translators – which works to empower and support professional authors across the UK.

In this National Year of Reading, I was pleased to read the ACEVO blog guidelines, which define how contributors can use Generative Artificial Intelligence (GenAI) to brainstorm, draft copy, or create initial structure in written works.

This type of transparency is vital as we all grapple with the threshold of permitted uses – what is and is not OK. It also demonstrates the importance of human-authored works in an era of AI, and the inherent limitations of AI’s derivative outputs in terms of accuracy and authenticity.

Ethical AI policy

With the mainstream deployment of this new technology, leaders need to maintain organisational integrity, putting in place appropriate guidelines and guardrails that protect their organisations and their people from unintended consequences and ensure generative AI is used for good.

Without safe, responsible and ethical AI policies and guidelines, the risk is that unverified and unlabelled GenAI outputs erode the quality and credibility of our human work.

Of course, GenAI can be used for good – to increase productivity and innovation – but we must retain our human agency and quality control, to ensure that the outputs serve the humans this technology was created for, not vice versa.

What does responsible and ethical AI look like in practice?

In the creative industries, we’re lobbying for a CLEAR framework to be adopted. This means:

C – Consent

Buy fairly trained AI software. By this we mean AI large language models that have been trained on lawfully sourced materials, with the creators’ consent – rather than what has happened to date, which has been the industrial-scale copyright infringement of creative works to train the majority of AI models without permission or payment.

L – Licensing

Pay for what you use. In publishing, this happens via licensing, which is the lawful, transparent route for AI developers and organisations to access creative works and software based on copyright-protected materials, and which ensures fair payment goes back to the creators. ALCS | AI licences

E – Ethical use of AI training data

We need enforceable, ethical standards for the sourcing, curation and application of training data, including a ban on non-consensual scraping and on misrepresentation of authorship, as well as impact assessments for GenAI systems to evaluate the cultural, social and employment effects of their deployment within society.

A – Accountability

We need public, auditable records of all licensing arrangements, with clear labelling of AI-generated content so that we can all distinguish synthetic content from human-authored content, and for tech companies to be truthful about their carbon footprint. We do not have this in the UK yet, which is why we (the Society of Authors) are launching a scheme that offers clear labelling options for human-authored works: Human Authored

R – Remuneration and rights

As you consider how to deploy AI in your organisations, please do so responsibly: with risk assessments, guardrails and consideration of intellectual property rights (copyright), and of how your use of AI supports or undermines creators getting paid for the use of their copyrighted work, on which AI models depend for their derivative outputs.

Whilst talking about rights, let’s not forget human rights, including the right to work, the right to fair remuneration, and the right to the protection of our personal data and likeness, all of which need to be enshrined in a national AI governance framework.

Five steps you can take to implement AI responsibly in your workplace:

  1. Review potential AI use cases, conduct risk assessments, and consider how these interact with your other policies, such as your sustainability policy.
  2. Implement an ethical AI permitted use and disclosure policy so that you are clear when and where GenAI can and cannot be used.
  3. Support ethical AI companies. Prioritise GenAI tools that enhance and support human creativity rather than those that erode and displace it.
  4. Train your team. Educate yourself and those around you to better understand what GenAI is and how it can be used safely, sparingly and sustainably in your workplace.
  5. And finally… do your bit for humanity. Use ethical AI to support human creativity, and the writers, illustrators, musicians, photographers, translators and others who rely on copyright and licensing to monetise their creative endeavour and – ultimately – to help pay their bills.
