Generative AI Safety & Robustness
Beginner’s Masterclass with PRISM Eval
Date: Thursday, December 12th, 2024.
Time: 7:30pm – 10:30pm CET.
Where: Linklaters, 25 rue de Marignan, 75008 Paris
Who: Harvard Alumni and guests (one guest only).
Cost: €20 for HCF members, with sign-up priority (join our club); €35 for alumni and guests.
Only 40 spots! Please sign up early, as the event may fill up quickly.
Light food and drinks will be provided.
As part of our Get Ready for the Future series, join us for our Masterclass on “Generative AI Safety and Robustness”, a mini-series led by PRISM Eval co-founders Pierre Peigné (Chief Scientist) and Nicolas Miailhe (CEO).
As we delve into the rapidly evolving field of generative artificial intelligence powered by large language models, it is crucial to address the pressing issues of safety and robustness. Generative AI, with its ability to create realistic and contextually relevant content, holds immense potential for radically transforming industries ranging from healthcare and finance to entertainment and education.
However, this power comes with significant challenges. Ensuring the safety of generative AI models involves mitigating risks such as misinformation, bias, cybersecurity vulnerabilities, and unintended consequences. Robustness, meanwhile, requires these models to perform reliably under varied conditions and adversarial attacks across a very large set of use cases.
Complementing the examples provided during the Google Masterclass, this session will explore the key questions: How can we ensure that generative AI models are safe and robust? What are the emerging best practices for stress-testing and validating these models? And how can we build resilient systems that withstand real-world challenges? Join us as we navigate these critical topics and discuss the future of generative AI.*
*Introduction generated by Mistral’s Le Chat generative tool.
Class highlights include:
- GenAI safety: key concepts, challenges, and dynamics
- Prompt Engineering fundamentals and practical suggestions
- State-of-the-art demonstration of GenAI chatbot jailbreaking techniques (mini-workshop)
- How to assess the safety and robustness of an LLM and a GenAI application (chatbot, AI assistant, knowledge management, etc.)
- Further resources and reading for safe AI development and deployment
Pierre Peigné
Co-founder & Chief Scientist, PRISM Eval
Since January 2024, Pierre Peigné has been co-founder and Chief Scientist of PRISM Eval.
Pierre began studying AI Safety in January 2022, while working as a freelance Machine Learning Engineer. He participated in research programs organized by Pivotal Research (formerly CHERI) from July to August 2022, and MATS 3.0 from November 2022 to June 2023 under the supervision of Lee Sharkey (Apollo Research).
His research has focused on problems of “implicit search” (mesa-optimization) internalized in neural network computations; mechanistic interpretability (exploring initialization methods to speed up the extraction of superposed features in dense neural networks via overcomplete dictionary learning); and the security of generative agent systems, both individually and in multi-agent environments. At the end of the MATS program, he decided to team up with Quentin to develop the GenAI Ethology research agenda.
Pierre holds a Master’s degree in Logic, Philosophy and History of Science (LoPHiSc) from Paris Sorbonne University (Paris-IV) and a Master’s degree in Computer Programming from Ecole 42, where he chaired the artificial intelligence association, 42ai.
When he’s not reading new scientific publications in the library, Pierre spends his free time climbing, hiking, and rock dancing with his wife, as well as playing board games.
Nicolas Miailhe
Co-founder & CEO of PRISM Eval
Since January 2024, Nicolas (Nico) Miailhe has been a co-founder and the CEO of PRISM Eval.
Until December 2023, Nico was President and Chief Executive Officer of The Future Society (TFS), which he originally co-founded in 2014 at the Harvard Kennedy School of Government. Nico is acknowledged as a pioneer in the governance of AI and emerging technologies, a field he has been actively shaping since 2011. His contributions over the past decade have been instrumental in establishing foundational governance frameworks and regulatory mechanisms for AI in the European Union, in the United States, and at a global scale.

Since 2019, Nico has been a driving force behind The Future Society’s Athens Roundtable, the leading international multi-stakeholder forum on AI and the Rule of Law (6th edition scheduled for Dec. 9 at the OECD, in partnership with the AI Action Summit; https://www.aiathens.org/dialogue/sixth-edition).

As a strategic thought leader, Nico has advised governments, international organizations, philanthropies, and multinational corporations globally. He serves as an appointed expert to the Global Partnership on AI (GPAI), where he co-chairs the Committee on Climate Action & Biodiversity Preservation, and is an invited expert to the OECD’s AI Group of Experts (ONE AI) and UNESCO’s High Level Expert Group on AI Ethics, among others. Nico has held teaching positions in AI Governance at the Paris School of International Affairs (Sciences Po) and the IE School of Global & Public Affairs in Madrid, and is a Future World Fellow at the Center for the Governance of Change at IE Business School.
An Arthur Sachs Scholar, he holds a Master in Public Administration from Harvard Kennedy School, a Master in Defense, Geostrategy & Industrial Dynamics from Panthéon-Assas University, and a Bachelor of Arts in European Affairs and International Relations from Sciences Po Strasbourg.
Tickets available via Weezevent.