In this one-day course you will learn how to use Apache Spark and compute clusters on the Azure Databricks platform to run data analytics workloads in a data lakehouse.
You should be an Azure data engineer with subject matter expertise in designing, implementing, and managing data analytics solutions on Microsoft Azure.
Module 1: Explore Azure Databricks
Provision an Azure Databricks workspace.
Identify core workloads and personas for Azure Databricks.
Describe key concepts of an Azure Databricks solution.
Module 2: Use Apache Spark in Azure Databricks
Describe key elements of the Apache Spark architecture.
Create and configure a Spark cluster.
Describe use cases for Spark.
Use Spark to process and analyze data stored in files.
Use Spark to visualize data.
Module 3: Use Delta Lake in Azure Databricks
Describe core features and capabilities of Delta Lake.
Create and use Delta Lake tables in Azure Databricks.
Create Spark catalog tables for Delta Lake data.
Use Delta Lake tables for streaming data.
Module 4: Use SQL Warehouses in Azure Databricks
Create and configure SQL Warehouses in Azure Databricks.
Create databases and tables.
Create queries and dashboards.
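The database and table objectives of Module 4 can be scripted against a SQL Warehouse. This sketch assumes the `databricks-sql-connector` package, which provides `databricks.sql.connect`; the hostname, HTTP path, and token are placeholders you would take from your own workspace, so the connection function is defined but not called here.

```python
def warehouse_ddl(database: str, table: str) -> list:
    """Build the statements a setup script would run on a SQL Warehouse."""
    return [
        f"CREATE DATABASE IF NOT EXISTS {database}",
        f"CREATE TABLE IF NOT EXISTS {database}.{table} "
        "(id BIGINT, city STRING, amount DOUBLE)",
    ]

def run_setup(server_hostname: str, http_path: str, access_token: str):
    """Execute the DDL against a running SQL Warehouse.

    Call with your workspace host, the warehouse's HTTP path, and a
    personal access token (all placeholders in this sketch).
    """
    from databricks import sql  # pip install databricks-sql-connector
    with sql.connect(server_hostname=server_hostname,
                     http_path=http_path,
                     access_token=access_token) as conn:
        with conn.cursor() as cur:
            for stmt in warehouse_ddl("sales", "orders"):
                cur.execute(stmt)

print(warehouse_ddl("sales", "orders")[0])
```

Once the tables exist, the same warehouse serves the saved queries and dashboards built in the Databricks SQL interface.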
Module 5: Run Azure Databricks Notebooks with Azure Data Factory
Describe how Azure Databricks notebooks can be run in a pipeline.
Create an Azure Data Factory linked service for Azure Databricks.
Use a Notebook activity in a pipeline.
Pass parameters to a notebook.
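Parameter passing in Module 5 has two halves: the Notebook activity definition Azure Data Factory submits, and the notebook reading the value. The sketch below expresses the activity JSON as a Python dict; the activity, linked service, and notebook names are placeholders.

```python
import json

# Pipeline side: a Databricks Notebook activity whose baseParameters
# are delivered to the notebook as widget values.
notebook_activity = {
    "name": "RunIngestNotebook",
    "type": "DatabricksNotebook",
    "linkedServiceName": {
        "referenceName": "AzureDatabricksLinkedService",
        "type": "LinkedServiceReference",
    },
    "typeProperties": {
        "notebookPath": "/Shared/ingest",
        "baseParameters": {"folder": "landing/2024-01"},
    },
}
print(json.dumps(notebook_activity, indent=2))

# Notebook side (runs on Databricks, where dbutils is available):
#   folder = dbutils.widgets.get("folder")
```

The `linkedServiceName` reference is the Azure Data Factory linked service for Azure Databricks created earlier in the module; each key under `baseParameters` must match a widget name the notebook expects.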