Risk Regulation of Generative Artificial Intelligence in the Australian Government: The Case of Microsoft Copilot

Authors

Jayson Lamchek, Van-Hau Trieu

DOI:

https://doi.org/10.71265/esg7cf11

Keywords:

generative Artificial Intelligence, risk regulation, mitigation measures, fact-checking, chatbot

Abstract

In many countries, risk regulation is central to AI regulation. We examine generative AI (genAI) risk regulation in Australia through a case study of the trial deployment of Microsoft Copilot in government agencies. Risk mitigation depended on end-users’ responsibility for human review and fact-checking, on readiness testing, and on contractual assurances from vendors, but largely ignored impacts on team dynamics and the long-term implications for human abilities. We identify three areas for improvement: strengthening fact-checking and review by users who lack the time, knowledge and experience to perform them; addressing impacts on team dynamics and human abilities; and preparing measures for the impending use of genAI systems as internal and public-facing government chatbots, anchored in government-mandated collaboration among developers, deployers and users.


Author Biographies

  • Jayson Lamchek, Deakin University

    Jayson Lamchek is Research Fellow at the Law School, Deakin University, Melbourne, Australia.

  • Van-Hau Trieu, Deakin University

    Van-Hau Trieu is Associate Professor in Information Systems at the Business School, Deakin University, Melbourne, Australia.



Published

14-01-2026

Issue

Section

Articles

How to Cite

Lamchek, J., & Trieu, V.-H. (2026). Risk Regulation of Generative Artificial Intelligence in the Australian Government: The Case of Microsoft Copilot. Technology and Regulation, 2026, 18–37. https://doi.org/10.71265/esg7cf11
