parquet (DataStreamReader)

Loads a stream of Parquet files from a directory and returns the result as a streaming DataFrame.

Syntax

parquet(path, **options)

Parameters

Parameter  Type  Description
path       str   Path in any Hadoop-supported file system.
**options  dict  Additional options passed to the Parquet data source.

Returns

DataFrame

Examples

Load a stream from a temporary directory of Parquet files:

import tempfile
import time

with tempfile.TemporaryDirectory(prefix="parquet") as d:
    # Write a small Parquet dataset to stream from.
    spark.range(10).write.mode("overwrite").format("parquet").save(d)
    # Start a streaming query that reads the directory and prints
    # each micro-batch to the console.
    q = (spark.readStream
         .schema("id LONG")
         .parquet(d)
         .writeStream.format("console")
         .start())
    time.sleep(3)  # give the query time to process the initial batch
    q.stop()