make random id generation more performant #654
base: main
Conversation
Co-authored-by: Roman Shtylman <[email protected]>
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up-to-date status, view the checks section at the bottom of the pull request.
Interesting. You'll need to sign the contributor agreement for us to merge this, btw. But I do wonder if we should be using the native crypto UUID implementation here? Is that significantly slower than this approach? Just looking at this code made me fearful of collisions, and a true UUID might be a bit more robust?
I don’t think that is necessary at all. The ID only needs to be unique for the lifespan of a request/response pair. Before this PR it used 128 bits of randomness; this PR reduces that to 32 bits. I suspect that’s still more than enough. Also, the ID is sent as a string in every message, so keeping it short is desirable. That being said, I am not sure that this PR makes a measurable difference with respect to performance.
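A back-of-envelope check of the "still more than enough" claim, under the assumption stated above that IDs only need to be unique among concurrently in-flight request/response pairs (the function name is illustrative, not from the PR):

```javascript
// Birthday bound: with n concurrent messages drawing ids uniformly from a
// `bits`-wide space, the collision probability is approximately
// n * (n - 1) / 2 / 2^bits.
function collisionProbability(n, bits) {
  return (n * (n - 1)) / 2 / 2 ** bits;
}

// Even with 1000 messages in flight at once, a 32-bit id space gives a
// collision probability of roughly 1.16e-4 at any given moment.
console.log(collisionProbability(1000, 32));
```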
Wouldn't it be enough then to just increment an integer and use that as the id? E.g. something like:

```js
let nextId = 1;

function getNextId() {
  nextId = nextId > Number.MAX_SAFE_INTEGER ? 1 : nextId + 1;
  return nextId;
}
```
Granted, the speedup is not very significant, but it depends a little on your usage (and which endpoints you use). For high-frequency communication it is noticeable, but if you send lots of data it will probably not be, as most of the time is spent elsewhere. The boxplot below compares the duration of calling a worker's method.
I had that originally, but IDs have to be generated on both ends, and there’s no easy way to have a shared counter. So maybe it’s actually fastest to generate the random part once and concatenate a counter? Leaving it up to you and @benjamind to decide whether this is worth landing, but I guess in the end it is actually a reduction in complexity, too. So I am in favor.
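The random-prefix-plus-counter idea floated here could look roughly like this (a hypothetical sketch, not code from the PR; names are illustrative):

```javascript
// Each endpoint draws one random 32-bit prefix at startup, then appends a
// cheap local counter. Prefixes distinguish endpoints; counters distinguish
// messages from the same endpoint, so no shared counter is needed.
function makeIdFactory() {
  const prefix = Math.floor(Math.random() * 2 ** 32).toString(16);
  let counter = 0;
  return () => `${prefix}-${++counter}`;
}

const nextId = makeIdFactory();
nextId(); // e.g. "9f3a1c2b-1" — random prefix, then the counter
```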
Could you elaborate on this? From what I see, the ID is generated only on the proxy side; at least I don't see it being generated anywhere else.
This can happen when you send callbacks from the main thread to the worker using `Comlink.proxy`.
This PR introduces a simpler random ID generation scheme that is much more performant, which is especially noticeable when sending many requests in a short time.
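The shape of the simplification can be sketched like this (an illustrative comparison under the 128-bit-to-32-bit description in the thread, not the exact diff; function names are hypothetical):

```javascript
// Old style: join several independent random hex segments into a long id.
function oldStyleId() {
  return new Array(4)
    .fill(0)
    .map(() => Math.floor(Math.random() * 0xffffffff).toString(16))
    .join("-");
}

// New style: a single random draw yields a short hex string, which is both
// cheaper to generate and cheaper to ship inside every message.
function newStyleId() {
  return Math.floor(Math.random() * 0xffffffff).toString(16);
}
```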