ICAIMT 2025 Proceedings

International Conference on Artificial Intelligence Management and Trends

May 21–22, 2025

Abu Dhabi School of Management (ADSM), Abu Dhabi

Article

Incorporation of generative AI into the assessment of a practical advertising production university course – a case study

Philip Dennett, London South Bank University – UAE, Ajman, United Arab Emirates – philip.dennett@lsbuac.ae
Published: 01 Sep 2025 https://doi.org/10.63962/TDZT8910

Abstract

Students are increasingly using generative AI (artificial intelligence) tools in their university assignments, often running afoul of academic integrity rules. The purpose of this case study was to reduce the risk of academic integrity breaches and to integrate AI into elements of an advertising communication course. The changes made included practice and guidance on generative AI models, AI elements incorporated into a major assignment, and assessable personal reflections from students. The methods used were the development and presentation of an assessment brief and an associated rubric incorporating the use of AI tools. Preliminary results suggest students have a high level of comfort with generative AI tools and a clear understanding of their place in academic assessments.

Keywords: generative AI, academic integrity, advertising course.
I. INTRODUCTION
Following the COVID-19 pandemic there have been several changes in education and learning, including the widespread use of online learning and of generative AI tools by both academics and students¹. The Australian government's Tertiary Education Quality and Standards Agency (TEQSA) warns that the use of generative AI is generally not detectable and that it can produce responses to course-specific assessments. This has resulted in a higher incidence of academic integrity breaches².

Given both the widespread use and difficulty of detecting AI content, this research seeks to identify a way in which the use of AI tools can be incorporated into a written assessment to overcome these issues.

The scope of the research is to redesign an existing assessment in an undergraduate advertising course to incorporate the use of AI tools and revise the outputs to clearly identify the intellectual efforts of students. This will be of value to other academics who teach courses where a creative output is required.
II. BACKGROUND
Despite a growing body of literature, significant research gaps exist in understanding the impact of AI in university settings³. Generative AI is a deep learning model (DLM) designed to create original content, including written and visual material, in response to a user's request⁴. The availability of DLMs such as ChatGPT has given rise to questions about the future role of teachers and the design of assessments⁵. Ghanimi and colleagues suggest that rather than focus on the negatives we should separate data-gathering tasks (which AI can assist with) from critical thinking and creative tasks (which demonstrate the student's understanding of the course material). By working in tandem in this way, assessments can become programmatic rather than summative, allowing students multiple opportunities to demonstrate their competence⁶.

Lodge and colleagues⁷ advise that assessments in the age of AI should emphasise appropriate engagement with AI tools, integrated into the learning process: for example, framing the use of AI tools as a learning competency, demonstrated through student reflections⁸. The goal should be to integrate AI into the learning process in a way that reinforces academic integrity and cultivates essential digital literacy skills⁹.

Taking this advice, I decided to review the principal assessment in a course called Production: Creative Advertising that is aimed at second- and third-year undergraduates studying communications.
III. METHODOLOGY
This research uses a case study methodology to assess the design and implementation of an assessment intended, first, to reduce its vulnerability to academic misconduct and, second, to incorporate the use of AI as a learning competency in the assessment rubric.

The existing assessment, worth 30% of the overall grade, asked students (in groups) to produce a creative rationale and associated execution based on a client brief. Marks were awarded for the appropriateness of the rationale (to the brief) and the quality of the creative execution. The assessment submission consisted of a completed Creative Rationale template and an example creative execution.

Three problems were identified:
  • There was no assessment of individual input into the finished work.
  • Although the course asks students to produce a range of creative works, the existing assessment required only one creative example and left the choice of type and media to the students.
  • There was no mention of generative AI or its acceptable use.

The tools used in this research were an updated assessment brief and an associated rubric. Both these tools were presented in a tutorial environment allowing for student feedback and questions.

The assessment brief asked students to show evidence of research justifying their proposals as well as any working papers or meeting notes showing the development process for the campaign. Additionally, each student in the group also had to separately submit a 250-word reflection discussing their role in the assignment and the use of AI tools in the process of campaign development. This changed the emphasis from the end-product to the process.

Each group was also asked to include the following creative executions:
  • An A4 sized print advertisement
  • Three Instagram posts
  • A storyboard for a 30-second video ad (suitable to be shown on social media).

Students were advised that all elements should be produced to a professional standard and that any images used should be AI generated, with the prompts used included in the rationale or an appendix.

From an academic integrity viewpoint, students were told that AI tools could be used to assist with tasks like brainstorming, structuring, and editing, as long as the final submission was the students' own work and any AI assistance was acknowledged.

The new assessment rubric included an additional criterion, AI Use & Critical Engagement, alongside the existing four: creative concept; strategic thinking; visual and written execution; and presentation and structure. Each of the five criteria was weighted at 20%.

Previously the course itself did not have any instruction relating to generative AI, so the following components were added to the course outline:

In week 9, the tutorial introduced two generative AI tools, ChatGPT and Rendernet, along with an activity in which students were asked to create a virtual influencer. This task gave students experience in writing prompts and creating images, both skills relevant to the redesigned assessment. Students were also asked to review and reflect on the TEQSA advice to students on the use of artificial intelligence.

The data generated from student reflections in this study are not publicly available due to confidentiality agreements and ethical considerations. Informed consent was obtained to share the example used in Figure 1.
IV. RESULTS
At the time of writing, we have just completed the week 9 tutorial and briefed the students on the assessment. Initial insights were positive overall. Students expressed surprise that they would be allowed to use AI tools and appreciated the ethical restrictions imposed. After the session on creating an AI influencer, they felt quite comfortable using AI to support their creativity, and based on the AI influencers they created, I felt they now had the technical skill necessary to undertake the assessment tasks.

Once the assessments have been presented and graded, I will use the 250-word reflections by each student to measure their level of engagement with the assessment and their understanding of the use of AI tools in the university context.

Figure 1 below shows a sample of a student-generated image. The prompt this student used was: "Create a realistic image of a beautiful young Australian woman in a university classroom. She has medium length light brown hair and a pleasant expression, wearing a white lace top. There is a whiteboard in the background and there is light coming through the windows from the side. Make it look like it was taken on a film camera, and have the main focus be on the woman with the background slightly out of focus."

Figure 1
Sample student-generated AI image
V. CONCLUSION
The assessment design incorporates the use of generative AI tools and gives students experience in their ethical use. The new emphasis on process mitigates the risk of academic integrity breaches, and the addition of the individual reflection encourages students to participate in the process. Reviewing these personal reflections will enable me to determine whether the aims of this redesign have been met.

This research will provide a guide for academics who wish to incorporate AI tools into their course assessments. The final results will demonstrate whether students are able to use AI responsibly and ethically as an integral part of their assessment writing process. The research also provides a potential model for teaching the responsible use of AI tools, including prompting, evaluating outputs, and citing assistance.

This preliminary research can be expanded to include longitudinal studies that measure, over successive courses, the effects of students' growing competency in AI use and management. While this study was situated in the advertising discipline, similar studies could be carried out in other creative areas to determine the wider applicability of this model.

REFERENCES

[1] Baidoo-anu, D., & Owusu Ansah, L. (2023). Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning. Journal of AI, 7(1), 52-62. https://doi.org/10.61969/jai.1337500

[2] Tertiary Education Quality and Standards Agency (TEQSA). Artificial intelligence. https://www.teqsa.gov.au/guides-resources/higher-education-good-practice-hub/artificial-intelligence

[3] Butson, R., & Spronken-Smith, R. (2024). AI and its implications for research in higher education: A critical dialogue. Higher Education Research & Development, 43(3), 563-577. https://doi.org/10.1080/07294360.2023.2280200

[4] Gui, J., Sun, Z., Wen, Y., Tao, D., & Ye, J. (2021). A review on generative adversarial networks: Algorithms, theory, and applications. IEEE Transactions on Knowledge and Data Engineering. https://doi.org/10.1109/TKDE.2021.3130191

[5] Ghanimi, R., Ghanimi, I., Al Karkouri, A., Essahb, H., El Janati, B., & Ghanimi, F. (2024). Generative artificial intelligence (AI) on the field of education and learning: Threats, limitations and future directions. Journal of Innovation and Digital Health, 1(2), 79-84.

[6] van der Vleuten, C., Lindemann, I., & Schmidt, L. (2018). Programmatic assessment: The process, rationale and evidence for modern evaluation approaches in medical education, Medical Journal of Australia, 209(9), 386–388.

[7] Lodge, J. M., Howard, S., Bearman, M., Dawson, P., & Associates (2023). Assessment reform for the age of artificial intelligence. Tertiary Education Quality and Standards Agency.

[8] Chiu, T., Ahmad, Z., Ismailov, M., & Sanusi, I. (2024). What are artificial intelligence literacy and competency? A comprehensive framework to support them. Computers and Education Open, 6. https://doi.org/10.1016/j.caeo.2024.100171

[9] Corbin, T., Dawson, P., Nicola-Richmond, K., & Partridge, H. (2025). 'Where's the line? It's an absurd line': Towards a framework for acceptable uses of AI in assessment. Assessment & Evaluation in Higher Education. https://doi.org/10.1080/02602938.2025.2456207