
Apache Spark™ - Unified Engine for large-scale data analytics
Apache Spark is a multi-language engine for executing data engineering, data science, and machine learning on single-node machines or clusters.
Downloads - Apache Spark
Spark Docker images are available from Docker Hub under the accounts of both The Apache Software Foundation and Official Images. Note that these images contain non-ASF software and may be subject to different license terms.
Quick Start - Spark 4.1.0 Documentation
To follow along with this guide, first download a packaged release of Spark from the Spark website. Since we won’t be using HDFS, you can download a package for any version of Hadoop.
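A minimal sketch of the Quick Start flow in PySpark, assuming the README.md that ships with the downloaded Spark release sits in the working directory (the file path is illustrative):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("QuickStart").getOrCreate()

    # Read the file as a DataFrame with one row per line in a 'value' column.
    text_file = spark.read.text("README.md")
    print(text_file.count())                                             # total number of lines
    print(text_file.filter(text_file.value.contains("Spark")).count())   # lines mentioning "Spark"

    spark.stop()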
PySpark Overview — PySpark 4.1.0 documentation - Apache Spark
PySpark combines Python’s learnability and ease of use with the power of Apache Spark to enable processing and analysis of data at any size for everyone familiar with Python. PySpark supports all of Spark’s features such as Spark SQL, DataFrames, Structured Streaming, MLlib (Machine Learning) and Spark Core.
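A small, self-contained PySpark sketch of the DataFrame API the overview describes (the column names and sample rows are invented for illustration):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("PySparkOverview").getOrCreate()

    # Build a tiny in-memory DataFrame and run ordinary transformations on it.
    df = spark.createDataFrame(
        [("alice", 34), ("bob", 45), ("carol", 29)],
        ["name", "age"],
    )
    df.filter(F.col("age") > 30).show()
    df.agg(F.avg("age").alias("avg_age")).show()

    spark.stop()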
Getting Started — PySpark 4.1.0 documentation - Apache Spark
There are more guides shared with other languages, such as the Quick Start in the Programming Guides section of the Spark documentation. There are also live notebooks where you can try PySpark without any other setup.
Spark 3.5.5 released - Apache Spark
We are happy to announce the availability of Spark 3.5.5! Visit the release notes to read about the new features, or download the release today.
Structured Streaming Programming Guide - Spark 4.1.0 Documentation
Spark supports three types of time windows: tumbling (fixed), sliding, and session. Tumbling windows are a series of fixed-sized, non-overlapping, and contiguous time intervals.
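A hedged sketch of the three window types in PySpark Structured Streaming, using the built-in rate source so it runs without external input (durations and rates are arbitrary):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("TimeWindows").getOrCreate()

    # The 'rate' source emits (timestamp, value) rows, handy for experiments.
    events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

    # Tumbling window: fixed 1-minute, non-overlapping buckets.
    tumbling = events.groupBy(F.window("timestamp", "1 minute")).count()

    # Sliding window: 1-minute windows starting every 30 seconds, so they overlap.
    sliding = events.groupBy(F.window("timestamp", "1 minute", "30 seconds")).count()

    # Session window: groups rows separated by gaps shorter than 5 minutes;
    # a watermark lets old sessions be finalized.
    sessions = (events.withWatermark("timestamp", "2 minutes")
                      .groupBy(F.session_window("timestamp", "5 minutes"))
                      .count())

    query = tumbling.writeStream.outputMode("complete").format("console").start()
    query.awaitTermination()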
Spark Streaming - Spark 4.1.0 Documentation
Spark Streaming is an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams. Data can be ingested from many sources like Kafka, Kinesis, or TCP sockets.
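The classic DStream word count is a compact illustration of that API; a hedged sketch, assuming text arrives on a local TCP socket (for example from nc -lk 9999), and noting that new applications are generally steered toward Structured Streaming instead:

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext

    sc = SparkContext("local[2]", "NetworkWordCount")
    ssc = StreamingContext(sc, batchDuration=1)   # 1-second micro-batches

    # Host and port are illustrative.
    lines = ssc.socketTextStream("localhost", 9999)

    counts = (lines.flatMap(lambda line: line.split(" "))
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))
    counts.pprint()

    ssc.start()
    ssc.awaitTermination()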
Performance Tuning - Spark 4.1.0 Documentation
Apache Spark’s ability to choose the best execution plan among many possible options is determined in part by its estimates of how many rows will be output by every node in the execution plan (read, filter, join, etc.).
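Those row-count estimates come from table and column statistics; a hedged sketch of collecting them and enabling the cost-based optimizer from PySpark (the table name my_table is illustrative and must already exist in the catalog):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("CBOStats").getOrCreate()

    # Both settings are ordinary Spark SQL configurations.
    spark.conf.set("spark.sql.cbo.enabled", "true")
    spark.conf.set("spark.sql.statistics.histogram.enabled", "true")

    # Collect table- and column-level statistics that feed the optimizer's estimates.
    spark.sql("ANALYZE TABLE my_table COMPUTE STATISTICS FOR ALL COLUMNS")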
SparkR (R on Spark) - Spark 4.1.0 Documentation
To use Arrow when executing SparkR operations that transfer data between Spark and R, users need to set the Spark configuration ‘spark.sql.execution.arrow.sparkr.enabled’ to ‘true’ first. This is disabled by default.
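For comparison, a configuration like this can be supplied when a session is created; shown here with PySpark's builder to stay consistent with the other sketches (in SparkR itself the flag is set through the session's Spark configuration):

    from pyspark.sql import SparkSession

    # The key/value pair is the one named above; any Spark SQL configuration
    # can be supplied this way when the session is built.
    spark = (SparkSession.builder
             .appName("ArrowConfig")
             .config("spark.sql.execution.arrow.sparkr.enabled", "true")
             .getOrCreate())

    print(spark.conf.get("spark.sql.execution.arrow.sparkr.enabled"))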