-
I am getting the error below while trying to run `ingest.py`. Note that I downloaded the model and loaded it using this code:

```python
from sentence_transformers import SentenceTransformer
```

When I execute `ingest.py`, I get:

```
runfile('C:/Users//Ritish/DataScience Project Folder/chatbot/llama_chatbot/bloke/localGPT/ingest.py', wdir='C:/Users//Ritish/DataScience Project Folder/chatbot/llama_chatbot/bloke/localGPT')

  File ~\dev\dbconda-env-pradrit\chatbot\lib\site-packages\spyder_kernels\py3compat.py:356 in compat_exec
  File c:\users\ritish\datascience project folder\chatbot\llama_chatbot\bloke\localgpt\ingest.py:161 in <module>
  File ~\dev\dbconda-env-pradrit\chatbot\lib\site-packages\click\core.py:1130 in __call__
  File ~\dev\dbconda-env-pradrit\chatbot\lib\site-packages\click\core.py:1055 in main
  File ~\dev\dbconda-env-pradrit\chatbot\lib\site-packages\click\core.py:1404 in invoke
  File ~\dev\dbconda-env-pradrit\chatbot\lib\site-packages\click\core.py:760 in invoke
  File c:\users\ritish\datascience project folder\chatbot\llama_chatbot\bloke\localgpt\ingest.py:147 in main
  File ~\dev\dbconda-env-pradrit\chatbot\lib\site-packages\langchain\vectorstores\chroma.py:446 in from_documents
  File ~\dev\dbconda-env-pradrit\chatbot\lib\site-packages\langchain\vectorstores\chroma.py:407 in from_texts
  File ~\dev\dbconda-env-pradrit\chatbot\lib\site-packages\langchain\vectorstores\chroma.py:95 in __init__
  File ~\dev\dbconda-env-pradrit\chatbot\lib\site-packages\torch\nn\modules\module.py:1614 in __getattr__

AttributeError: 'SentenceTransformer' object has no attribute 'embed_documents'
```
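For context on what the error means: `Chroma.from_documents` calls `embed_documents` on the object it is given, so it expects a LangChain-style embeddings object, not a raw `SentenceTransformer` (which only has `.encode`). A minimal sketch of that interface is below; the adapter class and the toy stand-in model are illustrative only, not localGPT's actual code (in practice LangChain's own `HuggingFaceEmbeddings` wrapper plays this role):

```python
class SentenceTransformerEmbeddings:
    """Hypothetical adapter giving an .encode()-style model the
    embed_documents/embed_query interface that Chroma expects."""

    def __init__(self, model):
        self.model = model  # any object with .encode(list_of_texts)

    def embed_documents(self, texts):
        # One embedding (list of floats) per input text.
        return [list(vec) for vec in self.model.encode(texts)]

    def embed_query(self, text):
        return self.embed_documents([text])[0]


class FakeModel:
    """Toy stand-in so the sketch runs without downloading a real model."""

    def encode(self, texts):
        return [[float(len(t)), 0.0] for t in texts]


emb = SentenceTransformerEmbeddings(FakeModel())
print(emb.embed_query("hello"))  # -> [5.0, 0.0]
```

Passing an object like `emb` (or LangChain's own wrapper) instead of the bare `SentenceTransformer` is what makes the `embed_documents` lookup succeed.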
-
Fixed it by giving the offline model path directly when loading the offline instruction model.
-
I encountered a similar problem while using sentence-transformers. For those attempting the same who would like to download the model once and reuse it afterwards, here's a suggestion: pass the path to your model directory as the `cache_folder` argument instead of editing the library code, since edits to the library will cause issues on update. With this approach, the model is downloaded to the specified directory on the first attempt, and you can then conveniently load it from that location from then on.
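A minimal sketch of the `cache_folder` approach described above (the model name and cache path are placeholders, and it assumes sentence-transformers is installed):

```python
def load_cached_model(model_name, cache_dir):
    """Load a sentence-transformers model, downloading it into cache_dir on
    the first call and reusing the local copy on later calls."""
    # Imported inside the function so the helper can be defined even in an
    # environment where the package is not yet installed.
    from sentence_transformers import SentenceTransformer

    return SentenceTransformer(model_name, cache_folder=cache_dir)


# e.g. model = load_cached_model("all-MiniLM-L6-v2", r"C:\models\st_cache")
```

After the first run, the files under `cache_dir` are reused, so no network access is needed.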
Fixed it by giving the offline model path directly when loading the offline instruction model: change the code in the Lib\site-packages\langchain\embeddings\huggingface.py file.
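An alternative to patching huggingface.py inside site-packages (which a package upgrade would overwrite, as noted in the reply above) is to pass the local model directory when constructing the embeddings object. This is only a sketch: it assumes the pre-0.1 LangChain import layout shown in the traceback, and the path is a placeholder:

```python
def make_local_instruct_embeddings(local_model_dir):
    """Build instructor embeddings from a local model directory instead of a
    hub model id, avoiding any edits to the installed langchain package."""
    # Imported inside the function so the helper can be defined even in an
    # environment where langchain is not installed.
    from langchain.embeddings import HuggingFaceInstructEmbeddings

    return HuggingFaceInstructEmbeddings(model_name=local_model_dir)


# e.g. embeddings = make_local_instruct_embeddings(r"C:\models\instructor-large")
```

Since `model_name` accepts a filesystem path as well as a hub id, the library code itself never needs to change.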