In this course you will cover the fundamentals of Apache Spark. You will look at the architecture of Spark, why the design choices behind the Resilient Distributed Dataset (RDD) were made, and how RDDs improve on the Hadoop MapReduce model. You will also learn how Spark works and understand the various transformations and actions in Spark. This course is part of the Artificial Intelligence Infrastructure learning path; complete this path to learn how to build AI systems for the enterprise.
To complete the Artificial Intelligence Infrastructure learning path, you should have intermediate knowledge of Python and Google Colab. You should also understand data preparation, have a basic knowledge of common ML and AI algorithms, and be familiar with databases such as MySQL, MongoDB, and Cassandra, as well as data formats used for large collections of data.