Code responsibly with generative AI in C++
Embark on a comprehensive exploration of cybersecurity and secure coding practices in this intensive three-day course. It focuses primarily on C++ but also integrates some C concepts. Building on a primer on machine code, assembly, and memory layout (Intel and ARM versions available), the curriculum addresses critical security issues related to memory management. Various protection techniques at the level of source code, compiler, OS, or hardware are discussed – such as stack smashing protection, ASLR, or the no-execute (NX) bit – to understand how they work and to make clear what we can and cannot expect from them.
The various secure coding subjects are aligned with common software security weakness categories, such as security features, error handling, and code quality. Many weaknesses, however, stem from missing or improper input validation. In this category you'll learn about injection, the surprising world of integer overflows, and handling file names correctly to avoid path traversal.
Through hands-on labs and real-world case studies, you will work through the details of secure coding practice, acquiring essential cybersecurity approaches and skills.
So that you are prepared for the forces of the dark side.
So that nothing unexpected happens.
Nothing.
Audience & Prerequisites:
- C/C++ developers
- General C++ and C development
Standards and references:
- SEI CERT, CWE and Fortify Taxonomy
- 29 Labs and 8 Case Studies
What you will learn:
- Getting familiar with essential cybersecurity concepts
- Correctly implementing various security features
- Identifying vulnerabilities and their consequences
- Learning security best practices in C++
- Managing vulnerabilities in third-party components
- Understanding input validation approaches and principles
Outline
- Cyber security basics
- Memory management vulnerabilities
- Memory management hardening
- Common software security weaknesses
- Using vulnerable components
- Wrap up
Note:
This variant of the course deals extensively with how certain security problems in code are handled by GitHub Copilot.
Through a number of hands-on labs, participants will get first-hand experience of how to use Copilot responsibly and how to prompt it to generate the most secure code. In some cases this is trivial, in most cases it is not, and in yet others it is essentially impossible.
At the same time, the labs provide general experience with using Copilot in everyday coding practice: what you can expect from it, and in which areas you shouldn't rely on it.
About the instructor: Balázs Kiss
Balázs started in software security two decades ago as a researcher in various EU projects (FP6, FP7, H2020) while also taking part in over 25 commercial security evaluations: threat modeling, design review, manual testing, fuzzing. While breaking things was admittedly more fun, he's now on the other side, helping developers stop attacks at the (literal) source.
To date, he has held over 100 secure coding training courses all over the world on typical code vulnerabilities, protection techniques, and best practices.
His most recent passion is the (ab)use of AI systems, the security of machine learning, and the effect of generative AI on code security.