BUG: read_parquet times out resolving S3 region #57449

Closed
2 of 3 tasks
hutch3232 opened this issue Feb 16, 2024 · 5 comments
Labels
Bug · Closing Candidate (may be closeable, needs more eyeballs) · IO Network (local or cloud (AWS, GCS, etc.) IO issues) · IO Parquet (parquet, feather)

Comments

@hutch3232 (Contributor)

Pandas version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.

  • I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

import os
import pandas as pd
import s3fs

os.environ["AWS_PROFILE"] = "my-bucket-rw"

# works perfectly
df = pd.read_csv("s3://my-bucket/example.csv")

# throws error seemingly about the region
df = pd.read_parquet("s3://my-bucket/example_parquet")
# ---------------------------------------------------------------------------
# OSError                                   Traceback (most recent call last)
# Cell In[7], line 1
# ----> 1 df = pd.read_parquet("s3://my-bucket/example_parquet")

# File /opt/conda/lib/python3.9/site-packages/pandas/io/parquet.py:667, in read_parquet(path, engine, columns, storage_options, use_nullable_dtypes, dtype_backend, filesystem, filters, **kwargs)
#     664     use_nullable_dtypes = False
#     665 check_dtype_backend(dtype_backend)
# --> 667 return impl.read(
#     668     path,
#     669     columns=columns,
#     670     filters=filters,
#     671     storage_options=storage_options,
#     672     use_nullable_dtypes=use_nullable_dtypes,
#     673     dtype_backend=dtype_backend,
#     674     filesystem=filesystem,
#     675     **kwargs,
#     676 )

# File /opt/conda/lib/python3.9/site-packages/pandas/io/parquet.py:267, in PyArrowImpl.read(self, path, columns, filters, use_nullable_dtypes, dtype_backend, storage_options, filesystem, **kwargs)
#     264 if manager == "array":
#     265     to_pandas_kwargs["split_blocks"] = True  # type: ignore[assignment]
# --> 267 path_or_handle, handles, filesystem = _get_path_or_handle(
#     268     path,
#     269     filesystem,
#     270     storage_options=storage_options,
#     271     mode="rb",
#     272 )
#     273 try:
#     274     pa_table = self.api.parquet.read_table(
#     275         path_or_handle,
#     276         columns=columns,
#    (...)
#     279         **kwargs,
#     280     )

# File /opt/conda/lib/python3.9/site-packages/pandas/io/parquet.py:117, in _get_path_or_handle(path, fs, storage_options, mode, is_dir)
#     114 pa_fs = import_optional_dependency("pyarrow.fs")
#     116 try:
# --> 117     fs, path_or_handle = pa_fs.FileSystem.from_uri(path)
#     118 except (TypeError, pa.ArrowInvalid):
#     119     pass

# File /opt/conda/lib/python3.9/site-packages/pyarrow/_fs.pyx:471, in pyarrow._fs.FileSystem.from_uri()

# File /opt/conda/lib/python3.9/site-packages/pyarrow/error.pxi:154, in pyarrow.lib.pyarrow_internal_check_status()

# File /opt/conda/lib/python3.9/site-packages/pyarrow/error.pxi:91, in pyarrow.lib.check_status()

# OSError: When resolving region for bucket 'my-bucket': AWS Error NETWORK_CONNECTION during HeadBucket operation: curlCode: 28, Timeout was reached

# the following two approaches work
df = pd.read_parquet("s3://my-bucket/example_parquet",
                     storage_options = {"client_kwargs": {
                         "verify": os.getenv("AWS_CA_BUNDLE")
                     }})

s3 = s3fs.S3FileSystem()
df = pd.read_parquet("s3://my-bucket/example_parquet", filesystem=s3)

Issue Description

I was not sure where to report this issue because it probably involves pandas, pyarrow, s3fs, and fsspec.

FWIW, I'm using an S3-compatible storage system called Scality.

As the example code shows, I'm able to use read_csv with the only configuration being the AWS_PROFILE environment variable. The analogous read_parquet call times out, apparently because it can't resolve the bucket's region. My S3-compatible instance is on-premises, so there isn't a "region"; however, setting AWS_DEFAULT_REGION to "" did not solve this either.

Interestingly, supplying AWS_CA_BUNDLE via storage_options or passing the filesystem argument fixes this. Worth noting that in my ~/.aws/config file the region and ca_bundle values are provided (as is endpoint_url).
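
For context, the relevant profile in my ~/.aws/config looks roughly like the sketch below (the region, bundle path, and endpoint here are placeholders, not my real values):

[profile my-bucket-rw]
region = us-east-1
ca_bundle = /etc/ssl/certs/ca-bundle.crt
endpoint_url = https://s3.example.internal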

Expected Behavior

I think read_parquet should behave like read_csv and be able to access and read the data without passing in additional configuration.

Installed Versions

INSTALLED VERSIONS

commit : fd3f571
python : 3.9.16.final.0
python-bits : 64
OS : Linux
OS-release : 4.18.0-513.9.1.el8_9.x86_64
Version : #1 SMP Thu Nov 16 10:29:04 EST 2023
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : en_US.UTF-8
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8

pandas : 2.2.0
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.8.2
setuptools : 69.1.0
pip : 24.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.3
IPython : 8.18.1
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.2.0
gcsfs : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 15.0.0
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : 2024.2.0
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : 0.19.0
tzdata : 2024.1
qtpy : None
pyqt5 : None

hutch3232 added the Bug and Needs Triage labels on Feb 16, 2024
rhshadrach added the IO Network and IO Parquet labels on Feb 16, 2024
@rhshadrach (Member)

rhshadrach commented Feb 16, 2024

Thanks for the report. Grepping the pandas code, I find no instances of AWS_PROFILE. I don't think pandas is doing anything special in read_csv, so it seems likely this is not a pandas issue.
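
If it helps, a minimal sketch of the path read_csv takes (bucket name reused from the example above, s3fs assumed installed): as far as I can tell, pandas hands s3:// URLs to fsspec, which resolves them to s3fs, and it is s3fs/aiobotocore that honors AWS_PROFILE and ~/.aws/config, which would explain why read_csv works here.

import fsspec

# read_csv opens s3:// paths through fsspec/s3fs rather than pyarrow, so the
# profile's endpoint_url and ca_bundle are picked up on this code path.
with fsspec.open("s3://my-bucket/example.csv", mode="rb") as f:
    f.readline()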

@bileamScheuvens

# works perfectly
df = pd.read_csv("s3://my-bucket/example.csv")

# throws error seemingly about the region
df = pd.read_parquet("s3://my-bucket/example_parquet")

Just to make sure: your read_csv call includes a file extension, but the read_parquet call doesn't.
Is that intentional?

@hutch3232 (Contributor, Author)

Appreciate you both taking a look at this.

I agree, it's odd that I didn't include an extension. I did that because I'm generally working with multipart parquet output from Spark, which is stored in a folder.

That said, I was testing with a single parquet file, and I just repeated the test now with a proper extension; the same error occurs. Thanks for the suggestion though, it was worth trying.

rhshadrach added the Closing Candidate label and removed the Needs Triage label on Feb 25, 2024
@djpixles

Dj

@mroeschke (Member)

Agreed, this doesn't appear to be a pandas issue. Additionally, the error is being raised inside pyarrow, so I would suggest checking whether you get the same error using pyarrow's parquet reader directly. Closing.
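
A rough sketch of that check, with placeholder endpoint and region values; FileSystem.from_uri is the call pandas makes per the traceback above, and the explicit S3FileSystem illustrates bypassing region resolution:

import pyarrow.parquet as pq
from pyarrow import fs

# Same call pandas makes internally; expected to reproduce the HeadBucket timeout.
filesystem, path = fs.FileSystem.from_uri("s3://my-bucket/example_parquet")

# Explicit filesystem pointed at the on-prem endpoint with a dummy region, so no
# region lookup is attempted (endpoint_override and region values are placeholders).
s3 = fs.S3FileSystem(endpoint_override="https://s3.example.internal",
                     region="us-east-1")
table = pq.read_table("my-bucket/example_parquet", filesystem=s3)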


5 participants