
Unable to load model: We couldn't connect to 'https://huggingface.co' to load this file #128

Open
gregbugaj opened this issue Nov 29, 2024 · 0 comments

@gregbugaj (Collaborator)

We get the following error at certain times about not being able to load the model from Hugging Face. This could be due to rate limiting or the service being unavailable.

Real cause:

   requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url:                                                                                                  
   https://huggingface.co/jinaai/jina-bert-implementation/resolve/main/configuration_bert.py 
ERROR  extract_t/rep-2@37 OSError("We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like                        
       jinaai/jina-bert-implementation is not the path to a directory containing a file named configuration_bert.py.\nCheckout your internet connection or see                      
       how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.") during 'WorkerRuntime' initialization                      
        add "--quiet-error" to suppress the exception details    

We should follow the instructions and implement offline mode, which would also allow for faster bootstrapping of the application.
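A minimal sketch of what offline mode could look like at startup, assuming the model cache has been pre-populated beforehand (the env-var names are the documented `transformers`/`huggingface_hub` flags; the commented load call is illustrative, not the actual app code):

```python
import os

# A minimal sketch: set the offline flags before transformers/huggingface_hub
# are imported, so every from_pretrained() call resolves against the local
# cache and never hits https://huggingface.co (avoiding 429s at startup).
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"

# With the cache pre-warmed, loading then stays fully local, e.g.:
#   from transformers import AutoConfig
#   config = AutoConfig.from_pretrained(
#       "jinaai/jina-bert-implementation",
#       trust_remote_code=True,     # this repo ships custom configuration_bert.py
#       local_files_only=True,      # fail fast instead of retrying the hub
#   )
```

The cache could be warmed once at build/deploy time (for example with `huggingface-cli download jinaai/jina-bert-implementation`), so worker initialization never depends on hub availability.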
