Guidance on the use of Artificial Intelligence (AI) in BFI funding applications and funded projects

This guidance document refers specifically to assistive and generative AI in BFI funding applications and the projects we support.

Find more information in our report: ‘AI in the Screen Sector: Perspectives and Paths Forward’.

Introduction

The focus of this guidance is the use of assistive and generative AI (Gen-AI) within BFI funding applications and funded projects. We have aligned our approach with guidance from other UK and international funding bodies. 

AI technologies are evolving rapidly and there are both opportunities and risks. The BFI is committed to monitoring these developments and will update this guidance when necessary to ensure it remains aligned with responsible and ethical AI practice.

We do not currently use AI to assess applications. While we may consider using AI for very basic eligibility checks in the future, the creative content of the application will always be assessed by human beings using the assessment criteria outlined in the guidelines for each fund. 

We are not a regulating body and this guidance is not intended to regulate AI usage across the industry or enforce rights management. However, we require all BFI funding recipients to sign a binding warranty to the BFI that no third party rights were, or will be, infringed in the application or project. We expect the awardee to uphold this warranty in full at all times, and this extends to any actual or proposed usage of AI technologies in either application or project.

Risks to be aware of when using Gen-AI

Gen-AI is a broad term covering models capable of generating diverse and seemingly original content, such as text, images, video, audio or code.

The outputs produced by Gen-AI tools can appear highly plausible, but may contain inaccuracies, bias, toxicity, copyright infringement and other problems.

If you are going to use an output from a Gen-AI tool, you should carefully fact-check and edit it first. Gen-AI tools are powerful but not perfect; humans should be the final checkpoint.

If you are considering using Gen-AI, you should be aware of the following risks:

  • inaccuracies and misinformation: outputs are not 100% accurate; they may present false information or draw on sources you would not normally trust. Gen-AI tools also cannot reason or understand context.
  • bias: Gen-AI learns from the data it is trained on, and where that data is imbalanced or biased, it may produce outputs with discriminatory effects.
  • fraud: fraudsters can abuse Gen-AI to falsify content such as financial documentation or correspondence, with intent to steal money and identity information.
  • ownership: outputs may include copyright works without permission, because unlicensed content is used to train some Gen-AI services; this infringes the rights of the creator or rights holder and may give rise to legal claims.
  • data breaches: Gen-AI can expose data it has learned, for example revealing sensitive or confidential information that one individual has input in order to generate content for another. In addition, most Gen-AI companies store the data you input on their networks, which may constitute a breach where personal data is concerned.

How the use of AI is considered in BFI funding applications

We do not prohibit the use of AI in funding applications, or in the projects we support, but we do require applicants to be transparent about the use of any AI, both assistive and generative, and to operate within the law when using it. BFI funding applicants and awardees are responsible for ensuring full compliance with intellectual property and copyright laws including obtaining the necessary permissions and consulting with any affected parties.

All BFI funding applications contain questions on your use of AI, both in your project and in the creation of your application form. These questions help us to understand how AI is being used across our funding programmes. 

The majority of our funding programmes are highly competitive. Using AI in your project, or to complete your application, could result in a project or application that looks remarkably similar to others. Losing the creative uniqueness of your project or application may mean it is less likely to stand out within the context of a competitive fund.

Using AI to generate, or assist with, writing an application

We want to ensure that our funding applications are open and inclusive. We recognise that some people, including those with certain disabilities or access requirements, may have been disadvantaged in making an application in the past and we welcome assistive AI technologies in supporting these applicants to complete funding applications. 

If you are using AI technologies to assist with your application, please research the ethical and legal operation of the AI you are using. Be aware of how the AI uses the information you put into it: you may find your unique ideas being suggested to other applicants who use the same AI tool. Conversely, you may find that the AI suggests ideas to you that are copyrighted by other people.

Using AI to generate, or assist with, the content of your project

We do not currently have assessment criteria specific to AI in any of our funding guidelines. The use of AI will be considered as part of the overall assessment of any project against the specific fund assessment criteria. All assessment criteria align with the objectives, outcomes and principles in our National Lottery strategy.

You should consider the following questions if you are using AI in your project and ensure that, where relevant, you are noting responses to them in your application:

  • does your use of AI adhere to all relevant legal obligations?
  • do you know what the AI tool you are using will do with the information you put into it – will it store it, train itself with it or share it with others? With this in mind, do you have the necessary rights to the information or material to input it into the AI?
  • are you being transparent about your use of AI with:
    • everyone working on your project (including getting consent where necessary)
    • funders
    • audiences
  • do you, or your project staff, have the necessary skills, technical expertise and governance structures to use AI responsibly and ethically?
  • is your use of AI supporting, rather than replacing, human creativity and ensuring that technology enhances storytelling while remaining secondary to artistic integrity?
  • is the use of AI in your project enabling the creation of unique material that would not otherwise be commercially viable and ensuring that public funding supports content the market alone cannot deliver?
  • is the use of AI in your project promoting new and diverse creative voices who may otherwise face barriers?
  • has AI enabled you to meaningfully represent diverse cultural perspectives and knowledge systems? If so, have you included safeguards to prevent misrepresentation of cultural elements and included appropriate consultation with the relevant cultural knowledge holders?
  • are the cost savings from your use of AI enabling a more ambitious creative outcome? Is it reducing the need for public funding or enhancing the overall quality and scope of the project?
  • have you identified any potential risks associated with AI use, such as technical failures or unintended outputs? Have you provided clear mitigation strategies?

We are aware that AI can be introduced to a project after the application stage, and we expect awarded applicants to make the BFI aware of its use throughout the life of the awarded project.