The hidden risks of AI (and how to avoid them)
Everyone’s using AI, but are they using it responsibly? Learn how to build ethical, trustworthy AI solutions.

AI capabilities are rapidly becoming part of everyday tools, from Copilot and ChatGPT to Azure AI services.

But as AI becomes embedded into data workflows, responsible use of AI is no longer optional. Decisions made during data preparation, modelling, and deployment directly influence fairness, transparency, and trust in the systems we build.

In this webinar, Senior AI Engineer Lewis Prince explores what Responsible AI actually means in day-to-day data work, and how you can translate high-level ethics principles into practical design decisions.

Rather than treating AI ethics as an abstract topic, this session focuses on real scenarios that occur during data and AI projects, including how bias can enter models unintentionally, how governance should apply to AI features in Fabric and Power BI, and how Microsoft’s Responsible AI Standard influences solution design.

Using practical examples, Lewis will show how ethical risks emerge across the AI lifecycle, and how you can identify and mitigate them before they become real problems.

In this session you’ll learn:

  • How ethical risks arise in real-world data and AI solutions
  • How bias can be introduced through data, modelling, and automation
  • Governance considerations when using AI features in Fabric and Power BI
  • What Microsoft’s Responsible AI principles mean in practice
  • A clear framework for evaluating AI features responsibly

This session is ideal for data professionals, engineers, analysts, architects, or anyone involved in building AI-powered solutions that need to be trustworthy and responsible.
