Free Webinar: Responsible AI in software development - How to use Copilot and other GenAI tools responsibly

Generative AI (GenAI) and large language models (LLMs) are revolutionizing software development by automating coding tasks from debugging to workflow optimization. This ongoing revolution is shifting practice from traditional model-driven development to AI-powered tools such as GitHub Copilot and Codeium. While these tools promise efficiency, understanding the strengths and limitations of GenAI is essential to using them effectively and responsibly.

Responsible AI addresses, among other things, ethical, legal, and transparency concerns in code generation. Explainability aims to open up the black-box nature of these models. There are also various robustness challenges, such as prompt injection, the generation of vulnerable code, and dependency hallucinations, just to name a few.
Developers should integrate GenAI responsibly into software development by addressing these risks, adopting rigorous assessment, and promoting secure coding practices.
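To make the vulnerable-code risk concrete, here is a minimal, hypothetical Python sketch (not taken from the webinar material, and the function names are illustrative only): the first function shows the kind of injection-prone SQL query an AI assistant might suggest when prompted naively, while the second shows the parameterized alternative that secure coding practice calls for.

import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Pattern an assistant might suggest: building SQL by string
    # concatenation, which is vulnerable to SQL injection.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer alternative: a parameterized query lets the database driver
    # handle escaping, so user input cannot alter the query structure.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

Spotting the difference between these two suggestions, and knowing which one to accept, is exactly the kind of judgment the webinar focuses on.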


In this webinar you will learn:

  • The importance of generative AI in software development
  • What responsible AI is and how it applies to AI-driven code generation tools
  • The strengths and weaknesses of Copilot and other AI code generation tools
  • How Cydrill courses can guide you in coding responsibly with GenAI


Useful information:
📅 Date: December 13, 2025
🕛 Time: 12:00-13:00
💸 Price: FREE
📢 Language: English

Are you ready to join?

Go to registration


Outline

Responsible AI in software development

The rise of GenAI in software development

  • Automated programming: from MDD to LLMs
  • The 'AI coding revolution' of the 2020s
  • What is GenAI (not) good for?
  • Responsible AI in the software industry
  • What is responsible AI?
  • XAI: what's happening in the code-generating black box?
  • Security and safety: how strong is that black box?
  • The dark side of AI code generation

Robustness of the generated code

  • Security of AI-generated code
  • Practical attacks against code generation tools
  • Dependency hallucination via generative AI
  • Case study – A history of GitHub Copilot weaknesses (up to mid-2024)
  • Demo – GitHub Copilot code security

Learning responsible coding with GenAI


Presenters: Erno Jeges / Balazs Kiss

Erno has been a software developer for 40 years, half of which he has spent writing code and half breaking it. In the last ten years he has focused on teaching developers how not to code. He has delivered more than 100 classes in 30 countries around the world.
Balazs started in software security 15 years ago as a researcher, taking part in over 25 commercial security evaluations. To date, he has delivered over 100 secure coding courses around the world on typical code vulnerabilities, protection techniques, and best practices. His most recent passion is the (ab)use of AI systems, the security of machine learning, and the effect of generative AI on code security.


Are you ready to join?

Go to registration