Data Engineering with Databricks

This is an introductory course that serves as an appropriate entry point for learning Data Engineering with Databricks. Below, we describe each of the four four-hour modules included in this course.

Data Ingestion with Delta Lake

This course is designed for data engineers who want to deepen their understanding of Delta Lake and use it to handle data ingestion, transformation, and management with ease. Using the latest features of Delta Lake, learners will explore real-world applications to enhance data workflows, optimize performance, and ensure data reliability.
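
For a sense of the hands-on work in this module, the sketch below shows one common ingestion pattern in PySpark: reading raw files from cloud storage and saving them as a Delta table. This is a minimal sketch, not course material; the paths and table names are placeholders, and it assumes a Databricks environment where Delta is the default table format.

    # Minimal Delta Lake ingestion sketch. Placeholder paths and table names;
    # assumes a Databricks cluster, where Delta is the default table format.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # already provided in Databricks notebooks

    # Read raw CSV files from cloud storage (placeholder location).
    raw_df = (
        spark.read
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("/Volumes/demo/raw/orders/")
    )

    # Light transformation: record when each row was ingested.
    bronze_df = raw_df.withColumn("ingested_at", F.current_timestamp())

    # Save the result as a managed Delta table.
    bronze_df.write.mode("overwrite").saveAsTable("demo.bronze.orders")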

Deploy Workloads with Databricks Workflows

This course is designed for data engineering professionals who are looking to leverage Databricks for streamlined and efficient data workflows. By the end of this course, you’ll be well-versed in using Databricks Jobs and Workflows to automate, manage, and monitor complex data pipelines. The course includes hands-on labs and best practices to ensure a deep understanding and the practical ability to manage workflows in production environments.
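
To give a rough idea of what a multi-task job definition looks like, the sketch below uses the Databricks SDK for Python (databricks-sdk) to create a simple two-task job with a cron schedule. The SDK is one way to express programmatically what the course teaches through the Jobs UI; the cluster ID, notebook paths, and names are placeholders, and workspace authentication is assumed to be configured.

    # Illustrative two-task job with a schedule, via the Databricks SDK for Python.
    # Assumes workspace authentication is configured; IDs, paths, and names are placeholders.
    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service import jobs

    w = WorkspaceClient()

    job = w.jobs.create(
        name="demo-nightly-pipeline",
        schedule=jobs.CronSchedule(
            quartz_cron_expression="0 0 2 * * ?",  # 02:00 every day
            timezone_id="UTC",
        ),
        tasks=[
            jobs.Task(
                task_key="ingest",
                existing_cluster_id="<cluster-id>",
                notebook_task=jobs.NotebookTask(notebook_path="/Repos/demo/ingest"),
            ),
            jobs.Task(
                task_key="transform",
                depends_on=[jobs.TaskDependency(task_key="ingest")],
                existing_cluster_id="<cluster-id>",
                notebook_task=jobs.NotebookTask(notebook_path="/Repos/demo/transform"),
            ),
        ],
    )
    print(f"Created job {job.job_id}")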

Build Data Pipelines with Delta Live Tables

This comprehensive course is designed to help participants understand the Medallion Architecture using Delta Live Tables. Participants will learn how to create robust and efficient data pipelines for structured and unstructured data, understand the nuances of managing data quality, and unlock the potential of Delta Live Tables. By the end of this course, participants will have hands-on experience building pipelines, troubleshooting issues, and monitoring their data flows within the Delta Live Tables environment.
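
To make the Medallion idea concrete, here is a minimal Delta Live Tables sketch in Python: a bronze ingestion table feeding a silver table with a data-quality expectation. It is only illustrative; the table names, source path, and expectation are placeholders, and the code runs only inside a Delta Live Tables pipeline, where the dlt module and spark session are provided.

    # Minimal Delta Live Tables sketch: bronze ingestion table feeding a silver table.
    # Runs only inside a DLT pipeline; names, path, and the expectation are placeholders.
    import dlt
    from pyspark.sql import functions as F

    @dlt.table(comment="Raw orders ingested from cloud storage (bronze).")
    def orders_bronze():
        return spark.read.format("json").load("/Volumes/demo/raw/orders/")

    @dlt.table(comment="Validated orders with a parsed order date (silver).")
    @dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
    def orders_silver():
        return dlt.read("orders_bronze").withColumn("order_date", F.to_date("order_ts"))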

Data Management and Governance with Unity Catalog

In this course, you'll learn about data management and governance using Databricks Unity Catalog. The course covers foundational concepts of data governance, the complexities of managing data lakes, Unity Catalog's architecture, security, and administration, and advanced topics such as fine-grained access control, data segregation, and privilege management.
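
As a small taste of privilege management, the sketch below issues Unity Catalog GRANT statements from PySpark. The catalog, schema, table, and group names are placeholders, and it assumes a Unity Catalog-enabled workspace where the caller has permission to grant these privileges.

    # Illustrative Unity Catalog privilege management from PySpark.
    # Placeholder catalog/schema/table/group names; requires a Unity Catalog-enabled
    # workspace and sufficient privileges to issue the GRANT statements.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()  # already provided in Databricks notebooks

    # Grant an account group read access down the securable hierarchy.
    spark.sql("GRANT USE CATALOG ON CATALOG demo TO `analysts`")
    spark.sql("GRANT USE SCHEMA ON SCHEMA demo.sales TO `analysts`")
    spark.sql("GRANT SELECT ON TABLE demo.sales.orders TO `analysts`")

    # Review the grants now in effect on the table.
    spark.sql("SHOW GRANTS ON TABLE demo.sales.orders").show(truncate=False)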

Prerequisites

  • Beginner familiarity with basic cloud concepts (virtual machines, object storage, identity management)
  • Ability to perform basic code development tasks (create compute, run code in notebooks, use basic notebook operations, import repos from Git, etc.)
  • Intermediate familiarity with basic SQL concepts (CREATE, SELECT, INSERT, UPDATE, DELETE, WHERE, GROUP BY, JOIN, aggregate functions, filters and sorting, indexes, tables, and views)
  • Basic knowledge of Python programming, the Jupyter notebook interface, and PySpark fundamentals

Outline

Data Ingestion with Delta Lake
Delta Lake and Data Objects
Set Up and Load Delta Tables
Basic Transformations
Load Data Lab
Cleaning Data
Complex Transformations
SQL UDFs
Advanced Delta Lake Features
Manipulate Delta Tables Lab

Deploy Workloads with Databricks Workflows
Introduction to Workflows
Jobs Compute
Scheduling Tasks with the Jobs UI
Workflows Lab
Jobs Features
Explore Scheduling Options
Conditional Tasks and Repairing Runs
Modular Orchestration
Databricks Workflows Best Practices

Build Data Pipelines with Delta Live Tables
The Medallion Architecture
Introduction to Delta Live Tables
Using the Delta Live Tables UI
SQL Pipelines
Python Pipelines
Delta Live Tables Running Modes
Pipeline Results
Pipeline Event Logs
Optional - Land New Data

Data Management and Governance with Unity Catalog
Data Governance Overview
Demo: Populating the Metastore
Lab: Navigating the Metastore
Organization and Access Patterns
Demo: Upgrading Tables to Unity Catalog
Security and Administration in Unity Catalog
Databricks Marketplace Overview
Privileges in Unity Catalog
Demo: Controlling Access to Data
Fine-Grained Access Control
Lab: Migrating and Managing Data in Unity Catalog