
The column created from scan_parquet with the include_file_paths argument misbehaves when selected before collection #20788

Open
lmmx opened this issue Jan 18, 2025 · 1 comment
Assignees
Labels
accepted Ready for implementation bug Something isn't working P-medium Priority: medium python Related to Python Polars

Comments


lmmx commented Jan 18, 2025

Checks

  • I have checked that this issue has not already been reported.
  • I have confirmed this bug exists on the latest version of Polars.

Reproducible example

import polars as pl

url = "hf://datasets/permutans/wdc-common-crawl-embedded-jsonld/2021-12/train-02800-of-06240.parquet"
df = pl.scan_parquet(url, n_rows=1, include_file_paths="via").select("via").head(1).collect()

You’d expect such a redundant combination of n_rows=1 and .head(1) to guarantee that .collect() returns a DataFrame with just one row, but you’d be wrong! :-)

shape: (6_070_269, 1)
┌─────────────────────────────────┐
│ via                             │
│ ---                             │
│ str                             │
╞═════════════════════════════════╡
│ https://huggingface.co/dataset… │
│ https://huggingface.co/dataset… │
│ https://huggingface.co/dataset… │
│ https://huggingface.co/dataset… │
│ https://huggingface.co/dataset… │
│ …                               │
│ https://huggingface.co/dataset… │
│ https://huggingface.co/dataset… │
│ https://huggingface.co/dataset… │
│ https://huggingface.co/dataset… │
│ https://huggingface.co/dataset… │
└─────────────────────────────────┘
Took 0.90s

Unfortunately this column misbehaves and cannot be limited before collection time.

Log output

(Apologies, I’m using Google Colab and it doesn’t seem to respect this env var)

Issue description

The column created by scan_parquet when passed the include_file_paths argument is not limited by n_rows or a chained .head() call when it is the only column selected.

This shows up unexpectedly in more convoluted examples where you might transform the selected file path on its own, like:

import polars as pl

url_dir = "hf://datasets/permutans/wdc-common-crawl-embedded-jsonld/2021-12/"
df = pl.scan_parquet(url_dir, n_rows=1, include_file_paths="via").select(
    pl.col("via").str.extract(r"\/[^/]+?\/(?P<file>[^/]+?)$")
).collect()
print(df["via"])
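The extraction pattern itself is sound, which can be checked with plain `re` against the example URL:

```python
import re

url = "hf://datasets/permutans/wdc-common-crawl-embedded-jsonld/2021-12/train-02800-of-06240.parquet"

# The $ anchor forces the match onto the last two path segments,
# so the named group captures just the file name.
m = re.search(r"\/[^/]+?\/(?P<file>[^/]+?)$", url)
print(m.group("file"))  # train-02800-of-06240.parquet
```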

What should happen in this case is that:

  • you scan the directory of parquet files
  • you take the first file in the directory listing because of the n_rows=1
  • you extract its file name from the URL with the regex
  • you select just that file name column
  • you collect

What you end up with is not a single-row, single-column DataFrame but a DataFrame with as many rows as the first file in that directory.

I’m also not very familiar with Google Colab’s resource limits, but when I run .unique() on this single column (which I expect to contain one distinct value per file) across just 2 files, the session runs out of RAM and crashes even with low_memory set to True. Something seems off here.

import polars as pl

# urls: list of parquet URLs in the directory (as above)
df = pl.scan_parquet(urls[:2], low_memory=True, include_file_paths="via").select(
    "via"
).unique().collect()
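Since include_file_paths emits one constant value per file, the distinct paths are already known before any data is read, so as a workaround the deduplication can be done on the input list itself rather than by materialising the column. A sketch, where `urls` is a hypothetical stand-in for the report's list of parquet URLs:

```python
# Hypothetical stand-in for the report's list of parquet URLs.
urls = [
    "hf://datasets/example/part-0.parquet",
    "hf://datasets/example/part-1.parquet",
    "hf://datasets/example/part-0.parquet",  # duplicate on purpose
]

# Deduplicate in plain Python, preserving order, without scanning any data.
unique_paths = list(dict.fromkeys(urls))
print(unique_paths)
```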

Your session crashed after using all available RAM.

Expected behavior

I would expect head(1) to return 1 row regardless of whether it is applied before or after LazyFrame collection.

Installed versions

--------Version info---------
Polars:              1.20.0
Index type:          UInt32
Platform:            Linux-6.1.85+-x86_64-with-glibc2.35
Python:              3.11.11 (main, Dec  4 2024, 08:55:07) [GCC 11.4.0]
LTS CPU:             False

----Optional dependencies----
Azure CLI            <not installed>
adbc_driver_manager  <not installed>
altair               5.5.0
azure.identity       <not installed>
boto3                <not installed>
cloudpickle          3.1.0
connectorx           <not installed>
deltalake            <not installed>
fastexcel            <not installed>
fsspec               2024.10.0
gevent               <not installed>
google.auth          2.27.0
great_tables         <not installed>
matplotlib           3.10.0
nest_asyncio         1.6.0
numpy                1.26.4
openpyxl             3.1.5
pandas               2.2.2
pyarrow              17.0.0
pydantic             2.10.5
pyiceberg            <not installed>
sqlalchemy           2.0.37
torch                2.5.1+cu121
xlsx2csv             <not installed>
xlsxwriter           <not installed>
@lmmx lmmx added bug Something isn't working needs triage Awaiting prioritization by a maintainer python Related to Python Polars labels Jan 18, 2025
@nameexhaustion nameexhaustion self-assigned this Jan 19, 2025
@nameexhaustion nameexhaustion added accepted Ready for implementation P-medium Priority: medium and removed needs triage Awaiting prioritization by a maintainer labels Jan 19, 2025
@github-project-automation github-project-automation bot moved this to Ready in Backlog Jan 19, 2025
@coastalwhite (Collaborator) commented:

Just for reference: POLARS_NEW_MULTIFILE=1 also produces the same wrong result. I think it should be fixed there.
