Replies: 1 comment 3 replies
-
You always send to a node, not to a cluster. But fscrawler supports a list of nodes instead of a single one. In case of failure it will remove the failed node from the list and then use the next one. I don't remember, though, how it behaves when the disconnection happens in the middle of indexing. Not sure if it retries or just fails. I assume the former...
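To illustrate, here is a minimal sketch of what the `_settings.yaml` might look like with several nodes listed. This assumes a recent FSCrawler version where `elasticsearch.nodes` entries take a `url` field (older releases used separate `host`/`port`/`scheme` keys), and the hostnames are placeholders for your own machines:

```yaml
# _settings.yaml — sketch only; job name and node URLs are examples
name: "es_dockets"
fs:
  url: "/path/to/dockets"
elasticsearch:
  nodes:
    # List every node in the cluster; on failure, fscrawler drops the
    # unreachable node and moves on to the next entry in this list.
    - url: "http://node1.example:9200"
    - url: "http://node2.example:9200"
    - url: "http://node3.example:9200"
```

With all cluster members listed like this, there is no single "cluster URL" to point at; failover comes from fscrawler iterating over the node list.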
-
I have a cluster set up with 5 nodes; although they are not on the same machine, they are on the same network. How do I make sure that, if one of the nodes goes down while fscrawler is processing files, it will keep going and write documents to a different node? The documentation for a local machine says: "It will connect to an elasticsearch cluster running on 127.0.0.1, port 9200." My gateway for my network is XXX.168.1.1, I have the nodes on XXX.168.1.196 to XXX.168.1.201, and my cluster is named "es_dockets". I currently have the URL "XXX.192.168.199:9200". What do I put in the URL field of my fscrawler.yaml file so that it will send dockets to the cluster and not a specific node? Or maybe what I am trying to figure out is how I can have fscrawler successively try each of the nodes within the pool until it finds one which isn't currently down.