Saves the content of the DataFrame in ORC format at the specified path.
Syntax
orc(path, mode=None, partitionBy=None, compression=None)
Parameters
| Parameter | Type | Description |
|---|---|---|
| path | str | The path in any Hadoop-supported file system. |
| mode | str, optional | The behavior when data already exists. Accepted values are 'append', 'overwrite', 'ignore', and 'error' or 'errorifexists' (default). |
| partitionBy | str or list, optional | Names of partitioning columns. |
| compression | str, optional | The compression codec to use. |
Returns
None
Examples
Write a DataFrame into an ORC file and read it back.
import tempfile
with tempfile.TemporaryDirectory(prefix="orc") as d:
    spark.createDataFrame(
        [{"age": 100, "name": "Alice"}]
    ).write.orc(d, mode="overwrite")
    spark.read.format("orc").load(d).show()
    # +---+-----+
    # |age| name|
    # +---+-----+
    # |100|Alice|
    # +---+-----+