When is your test good enough? Building a Robust Testing Framework for AI.

As organizations integrate advanced AI systems into their businesses, ensuring the safety, reliability, and robustness of these systems becomes a critical priority. However, AI testing remains a complex and evolving field. Key questions from an organizational perspective are: What expertise is required to ensure a secure application? Who is responsible for testing? When do you test? How much testing is enough? How much will it cost?

In this workshop, we want to explore and discuss common challenges in testing AI applications, and we want your input for our initiative to develop cost-effective AI safety testing programs. As a warm-up, a hands-on demo exercise lets us experiment with jailbreaking an LLM and assessing the effectiveness of these jailbreaks (no technical expertise required). Next, we explore in groups the common challenges, approaches, and structures that help address this topic from an organizational perspective. Finally, we reconvene to review the practical questions around AI safety testing and identify what businesses need to plan and run AI tests.

Who will get the most out of this: 
CISOs and AI safety specialists
DPOs, Compliance and Risk Management Leaders
AI Product Managers, AI Product Designers, or leaders of AI projects

What to bring:
A laptop, or the willingness to work in groups.
Your expertise and ideas to share.

Sign-up:
The workshop will be limited to 25 people. Please reserve your spot with this link:


Date: Apr 14 2025
Time: 08:00 - 12:00
Location: HWZ Hochschule für Wirtschaft Zürich, Lagerstrasse 5, 8004 Zürich