Seems like pandas/pytables `append` is a lot slower than writing into a new file. (Or else the code path that rewrites the table when strings get longer is being hit a lot.)
The sort step should probably pre-count lines per PUMA in `stats`, and maybe the max string lengths for the columns that need them. Then we can preallocate the file sizes and write into them instead of appending.
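A rough, untested sketch of what that could look like — `n_rows` and `str_widths` stand in for whatever the stats pass would actually record per PUMA:

```python
import pandas as pd

def write_puma(store: pd.HDFStore, key: str, chunks, n_rows: int, str_widths: dict):
    """Write all chunks for one PUMA into a preallocated pytables table."""
    for i, chunk in enumerate(chunks):
        if i == 0:
            # Creating the table: give pytables the final row count and the
            # max string widths up front, so later appends never have to
            # resize or rewrite anything.
            store.append(key, chunk, format="table",
                         expectedrows=n_rows, min_itemsize=str_widths)
        else:
            store.append(key, chunk, format="table")
```

If the stats pass records exact per-PUMA row counts and string widths, the later appends should be plain writes with no reallocation.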
Probably should (also?) consider using feather or parquet instead of hdf5.
feather is way faster to load but also bigger on disk
parquet is slightly smaller than hdf5 and way faster to load
So parquet seems like the way to go. Unfortunately, it doesn't really seem to be appendable (since it's columnar). Could write it in chunks and then do a (probably quick) rewrite at the end, or look into dask for everything (#23).
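A possible shape for the chunk-then-rewrite approach (just a sketch, assuming pyarrow; during the sort each chunk would go to its own small file via `chunk.to_parquet(path)`, and the final pass stitches them together):

```python
import pyarrow.parquet as pq

def combine_chunk_files(chunk_paths, out_path):
    """Rewrite a list of per-chunk parquet files into one file.

    Each chunk becomes one or more row groups in the output, so the
    final pass is just a sequential read + write per chunk.
    """
    schema = pq.read_schema(chunk_paths[0])
    with pq.ParquetWriter(out_path, schema) as writer:
        for path in chunk_paths:
            writer.write_table(pq.read_table(path))
```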
For now, manually converting hdf5 => parquet after sorting and letting `featurize` support either format.
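For reference, the manual conversion can be as simple as this (paths/keys illustrative):

```python
import pandas as pd

with pd.HDFStore("sorted.h5", mode="r") as store:
    for key in store.keys():  # e.g. "/puma_0101"
        store[key].to_parquet(key.strip("/") + ".parquet")
```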
With the new two-pass scheme with the merge at the end, the state merger is fast, but the PUMA merger is quite slow. Not sure whether that's due to casting categorical dtypes or just I/O.
Could merge in a separate thread as we go? Or again, maybe dask #23 solves this better.
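One way the threaded version could look (rough sketch; `sort_one_puma` and `merge_one_puma` are placeholders for the existing steps):

```python
from concurrent.futures import ThreadPoolExecutor

def sort_with_background_merge(pumas, sort_one_puma, merge_one_puma):
    """Hand each PUMA's merge to a worker thread as soon as its sort finishes."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        futures = []
        for puma in pumas:
            sort_one_puma(puma)
            futures.append(pool.submit(merge_one_puma, puma))
        for f in futures:
            f.result()  # re-raise anything that failed in the background
```

Note this only helps if the merge is actually I/O-bound; if the time is going into the categorical casts, a thread mostly serializes on the GIL and a process pool (or dask, per #23) would be the better bet.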