fit_frame_split - ValueError: Length of values does not match length of index #12

Open
csm-kb opened this issue Mar 14, 2024 · 9 comments · May be fixed by #13
csm-kb commented Mar 14, 2024

Hey! As mentioned in #7, there seem to be edge cases where the labels computed by fit_frame_split() don't match the row count of X fed to it. Not quite sure what's causing it at first glance!

The error in question:

ValueError: Length of values (17465) does not match length of index (17612)

The use in question (it is sorted by timestamp ascending before it goes in):

```python
clustering = ST_DBSCAN(eps1=0.25, eps2=250, min_samples=10).fit_frame_split(
    sub_df.loc[:, ["timestamp", "x", "y"]].values, 2000
)
sub_df["cluster"] = clustering.labels
```
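For context, the traceback itself comes from plain pandas column assignment: whenever the labels array is shorter than the frame it is assigned to, pandas raises exactly this error. A minimal stand-in (hypothetical 5-row frame, not the real dataset):

```python
import numpy as np
import pandas as pd

# Hypothetical 5-row frame standing in for sub_df
df = pd.DataFrame({"x": np.arange(5.0)})

try:
    df["cluster"] = np.zeros(4)  # 4 labels for 5 rows
except ValueError as err:
    print(err)  # Length of values (4) does not match length of index (5)
```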

Attached is the subset CSV of ordered timestamp/x/y data that yielded this for me. Timestamp is unix_millis, x/y are in an arbitrary space for particle data for a side project.

Currently looking into a temporary rewrite of it for the memory constraints I'm fighting with. I turned here because, with fit(), some very large (>100k-row) position datasets that are only a couple hundred MB in Pandas turned out (via memory_profiler) to cause up to a 6.8 GB increment in memory use, which eats heap and crashes smaller workers on my compute cluster, etc. -- probably the darn distance matrices becoming not-so-sparse.
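As a back-of-envelope check on that number (the row count below is an assumption for illustration, not taken from the issue): a condensed float64 pairwise-distance matrix over n points holds n(n-1)/2 entries, so roughly 41k rows is already enough to account for a ~6.7 GB allocation.

```python
# Illustrative only: n is an assumed row count, not a figure from the issue.
n = 41_000
condensed_entries = n * (n - 1) // 2   # entries in a condensed pdist matrix
bytes_needed = condensed_entries * 8   # float64 = 8 bytes per entry
print(f"{bytes_needed / 1e9:.1f} GB")  # 6.7 GB
```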

ST_DBSCAN_2024_03_14.csv

eren-ck commented Mar 18, 2024

Hi,
thanks for the tip. I'll have a look at it sometime this week in order to find out what the problem is.

Cheers,
Eren

csm-kb commented Mar 18, 2024

> Hi, thanks for the tip. I'll have a look at it sometime this week in order to find out what the problem is.

awesome, thank you! let me know how I can assist 🙇‍♂️

eren-ck commented Mar 25, 2024

Hi there,

Could you maybe upload an example notebook highlighting the error? I was not able to reproduce it. Please see the attached files.
test_fit_frame_split.zip

Cheers,
Eren

csm-kb commented Mar 27, 2024

> I was not able to reproduce your error.

That demo doesn't use fit_frame_split correctly! fit_frame_split uses fit internally, right? Your demo performs fit_frame_split on the data, but then runs another fit immediately after, which overwrites the labels from the fit_frame_split run.
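The overwrite can be seen with a toy stand-in (a hypothetical class, not the real estimator): both methods store labels on the instance, so whichever runs last wins.

```python
class ToyEstimator:
    """Hypothetical stand-in: both fit methods store labels on self."""
    def fit(self, X):
        self.labels = ["fit"] * len(X)
        return self

    def fit_frame_split(self, X, frame_size):
        self.labels = ["frame_split"] * len(X)
        return self

data = list(range(5))
est = ToyEstimator().fit_frame_split(data, 2)
est.fit(data)  # this second call replaces the fit_frame_split labels
print(est.labels[0])  # "fit", not "frame_split"
```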

Please run the following notebook mod; note that cell 2 works as expected, but cell 3 does not.
mod_test_fit_frame_split.zip

csm-kb commented Apr 5, 2024

@eren-ck hey! Just wanted to check if there was anything wrong with that notebook or intuition? Let me know!

eren-ck commented Apr 15, 2024

Hi there,

thanks for your patience. I finally had some time to look at this. You are right: in some cases fit_frame_split is not working correctly, while in others it does.

I have to investigate this further. Meanwhile, you can just use a smaller or larger frame size (1000 or 3000). I hope this helps you resolve the problem. For instance:

```python
import pandas as pd
from st_dbscan import ST_DBSCAN

def test_fit_split():
    df = pd.read_csv('ST_DBSCAN_2024_03_14.csv')

    # transform to numpy array
    data = df.loc[:, ['timestamp', 'x', 'y']].values

    st_dbscan = ST_DBSCAN(eps1=0.25, eps2=250, min_samples=10).fit_frame_split(data, 3000)

    df['cluster'] = st_dbscan.labels
    return df

df_fit_split = test_fit_split()
```

I expected that with the sparse matrices people would no longer have any need to rely on the fit_frame_split method. But yeah, I see your problem. I will fix this over the next few weeks.

Cheers,
Eren

csm-kb commented Apr 15, 2024

No worries on the delay -- and sounds good, happy to hear I'm not crazy! Will keep my eyes peeled for follow-up, as this would become the most memory-efficient approach when loads of parallel Spark jobs run at the same time on millions of grouped rows across thousands of groups. Exciting stuff!

DPS100 commented Jun 21, 2024

I hit the same error when sorting my dataframe before passing it into fit_frame_split. I believe this is not an issue with fit_frame_split alone: any code path that uses pandas.DataFrame.sort_values and subsequently sklearn.utils.check_array on a dataframe will fail unless the indices are reset.

> I was not able to reproduce your error.

I am able to reproduce it on my own dataset. If your data is not sorted by time and you use the pandas.DataFrame.sort_values method, it will create the error. The following snippets illustrate the error and how to fix it.

```python
from scipy.spatial.distance import pdist
from sklearn.utils import check_array

# (excerpt: selected_df, start_idx, end_idx are defined earlier)
sorted = selected_df.sort_values(by='UTC')

X_original_unsorted = selected_df.loc[start_idx:end_idx-1, ['UTC', 'x', 'y', 'z']]
print(f"Unsorted shape: {X_original_unsorted.shape}")  # (10000, 4)

X_original = sorted.loc[start_idx:end_idx-1, ['UTC', 'x', 'y', 'z']]
print(f"Sorted shape: {X_original.shape}")  # (11309, 4)

X_checked = check_array(X_original)
print(f"Checked shape: {X_checked.shape}")  # (11309, 4)

n, m = X_checked.shape

# pdist errors:
time_dist = pdist(X_checked[:, 0].reshape(n, 1), metric='euclidean')
# ValueError: Found input variables with inconsistent numbers of samples: [11309, 10000]
```
```python
from scipy.spatial.distance import pdist
from sklearn.utils import check_array

# (excerpt: selected_df, start_idx, end_idx are defined earlier)
sorted = selected_df.sort_values(by='UTC')

# This fixes it:
sorted.reset_index(drop=True, inplace=True)

X_original_unsorted = selected_df.loc[start_idx:end_idx-1, ['UTC', 'x', 'y', 'z']]
print(f"Unsorted shape: {X_original_unsorted.shape}")  # (10000, 4)

X_original = sorted.loc[start_idx:end_idx-1, ['UTC', 'x', 'y', 'z']]
print(f"Sorted shape: {X_original.shape}")  # (10000, 4)

X_checked = check_array(X_original)
print(f"Checked shape: {X_checked.shape}")  # (10000, 4)

n, m = X_checked.shape

# No error:
time_dist = pdist(X_checked[:, 0].reshape(n, 1), metric='euclidean')
```

This is probably still the intended behavior of sklearn.utils.check_array, but I am not sure; I will follow up at some point after looking into it. I would encourage fit_frame_split to sort by time itself, so that preventing this behavior is not made the user's problem. Failing that, document that fit will fail if the indices, as well as the timestamps, are not in strictly increasing order (for users who can't make a copy or sort the dataframe), or throw an exception saying that the indices must be ordered as well as the time column.
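Here is a small self-contained version of that mismatch (toy frame and column name, not the real data): on a unique but non-monotonic index, .loc slices between the *positions* of the endpoint labels, so the row count after sort_values can differ from the intended positional count until the index is reset.

```python
import pandas as pd

df = pd.DataFrame({'UTC': [10, 40, 20, 50, 30]})  # index 0..4
sorted_df = df.sort_values(by='UTC')              # index order: 0, 2, 4, 1, 3

# Label-based slice on the un-reset index spans from the position of
# label 0 (first row) to the position of label 3 (last row).
print(len(sorted_df.loc[0:3]))                    # 5 rows, not 4

sorted_df = sorted_df.reset_index(drop=True)
print(len(sorted_df.loc[0:3]))                    # 4 rows, as intended
```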

EDIT: I now realize the mistake is in using the .loc operator on indexes that are not in order, and that this issue does not occur in the same place as the original reporter's. My issue occurs because I pass a pandas.DataFrame instead of a 2D numpy array.

DPS100 commented Jul 9, 2024

I started encountering this issue even without passing a dataframe with wrong indexes. When outputting the size of each processed frame and the overlap while running @csm-kb's demo file, I made this discovery:

```text
2000
-249 # 1751
+2000 # 3751
-249 # 3502
+2000 # 5502
-249 # 5253
+2000 # 7253
-249 # 7004
+2000 # 9004
-249 # 8755
+2000 # 10755
-249 # 10506
+2000 # 12506
-249 # 12257
+2000 # 14257
-249 # 14008
+2000 # 16008
-249 # 15759
+1853 # 17612 # Should stop here, but doesn't
-249 # 17363
+102 # 17465 # Now some values are erased
```

Outputting the modified input reveals the same issue:

[screenshot of the modified input omitted]

There is a large discontinuity of skipped rows at line 17363. I believe this is an issue with the bounds of the for loop, or with the way the indexer works on lines 190 or 191 of st_dbscan.py. I was able to fix it by aborting the for loop in fit_frame_split() early, although this is probably not the desired fix. I have opened a pull request with this patch.
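A sketch of the loop behavior the trace implies (a hypothetical helper, not the library's actual code): with frame size 2000 and an overlap of 249, the windows should stop as soon as one reaches the end of the data, rather than emitting a trailing partial frame that erases rows.

```python
def split_frames(n_rows, frame_size, overlap):
    """Yield (start, stop) windows of up to `frame_size` rows that overlap
    by `overlap` rows, stopping once a window reaches the end of the data."""
    start = 0
    while start < n_rows:
        stop = min(start + frame_size, n_rows)
        yield start, stop
        if stop == n_rows:   # reached the end: break here instead of
            break            # emitting one more partial frame
        start = stop - overlap

windows = list(split_frames(17612, 2000, 249))
print(windows[-1])  # (15759, 17612) -- last frame ends exactly at row 17612
```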

DPS100 linked a pull request Jul 9, 2024 that will close this issue.
3 participants