Feature Request: Run as Lambda #231
So, I forked the library and played around with running it on my own as a lambda; here are a few things that are making it difficult out of the box:
Would you accept pull requests to change the behavior of any or all of these?
Hi. Again, sorry for the late reply. The nature of a serverless function is to have no state, and Bramble holds a lot of state about the services it federates. From a purely functional perspective I see the benefits of running low-traffic graph resolvers in a serverless environment, but for Bramble it would introduce some major architectural changes. At present, I can't see how these could be addressed without severely impacting Bramble's performance. I've outlined my concerns with the serverless approach below.
We've been running bramble in a lambda for a few months now, on a fork that allows us a bit more access to the internals, and it seems to work great. Lambdas aren't actually stateless, except insofar as they can be killed under low traffic. In practice, a lambda instance can live for up to 15 minutes before a new one is spun up, and AWS handles pre-warming that instance for you if there's existing traffic to the service. So long as Bramble doesn't rely on persisting something to disk, or on very expensive-to-populate caches, I don't see it being fundamentally incompatible. We've set it up so that it fetches the schema on startup and every couple of minutes, though I've been meaning to add the option to load the constituent / assembled schema from disk or S3 on startup instead of querying it from downstream APIs; this would be a lot faster on startup than querying each service, and the schema would rarely change. I think the ideal setup would be:
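The fetch-on-startup-then-refresh-periodically behavior described above can be sketched roughly as follows. `SchemaRefresher` and its `fetch` hook are hypothetical stand-ins for Bramble's actual schema-assembly internals, not its real API:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// SchemaRefresher keeps a long-lived Lambda instance's merged schema in
// sync with downstream services. fetch is a stand-in for the real
// schema-assembly step (querying each federated service and merging).
type SchemaRefresher struct {
	mu     sync.RWMutex
	schema string
	fetch  func() (string, error)
}

// Refresh fetches the schema once and swaps it in atomically.
func (r *SchemaRefresher) Refresh() error {
	s, err := r.fetch()
	if err != nil {
		return err
	}
	r.mu.Lock()
	r.schema = s
	r.mu.Unlock()
	return nil
}

// Schema returns the current merged schema.
func (r *SchemaRefresher) Schema() string {
	r.mu.RLock()
	defer r.mu.RUnlock()
	return r.schema
}

// Start refreshes immediately (cold start), then on the given interval
// until stop is closed. A failed periodic refresh keeps the last good schema.
func (r *SchemaRefresher) Start(interval time.Duration, stop <-chan struct{}) error {
	if err := r.Refresh(); err != nil {
		return err // fail fast on cold start
	}
	go func() {
		t := time.NewTicker(interval)
		defer t.Stop()
		for {
			select {
			case <-t.C:
				_ = r.Refresh() // keep serving the last good schema on error
			case <-stop:
				return
			}
		}
	}()
	return nil
}

func main() {
	n := 0
	r := &SchemaRefresher{fetch: func() (string, error) {
		n++
		return fmt.Sprintf("schema-v%d", n), nil
	}}
	stop := make(chan struct{})
	if err := r.Start(10*time.Millisecond, stop); err != nil {
		panic(err)
	}
	time.Sleep(35 * time.Millisecond)
	close(stop)
	fmt.Println(r.Schema())
}
```

Because the refresh runs in the background, a transient downstream failure degrades to serving a slightly stale schema rather than taking the gateway down.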
Do you know how the cold-start problem translates to cloud providers other than AWS? I think we should be cloud-provider agnostic; relying on the specific internals of AWS would not be the right way to go. That said, I think your suggestions go a long way toward making sure that doesn't happen.
I do like the option to load a pre-assembled schema, preferably moving the schema assembler into its own package and creating a separate tool for schema assembly.
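The pre-assembled-schema path could look something like the sketch below: prefer a snapshot file written ahead of time by a separate assembly tool, and fall back to assembling from the downstream services when no snapshot exists. `loadSchema` and `assemble` are hypothetical names, not part of Bramble:

```go
package main

import (
	"fmt"
	"os"
)

// loadSchema prefers a pre-assembled schema snapshot (e.g. produced by a
// separate schema-assembly tool and shipped with the deploy) and falls
// back to assembling it live. assemble stands in for the real merge step.
func loadSchema(path string, assemble func() (string, error)) (string, error) {
	if b, err := os.ReadFile(path); err == nil {
		return string(b), nil // fast cold start: no downstream round-trips
	}
	return assemble() // no snapshot: query each service and merge
}

func main() {
	// Simulate a snapshot left by the assembly tool.
	f, err := os.CreateTemp("", "schema-*.graphql")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	f.WriteString("type Query { hello: String }")
	f.Close()

	s, err := loadSchema(f.Name(), func() (string, error) {
		return "", fmt.Errorf("should not be called when snapshot exists")
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // prints the snapshot contents
}
```

Loading from S3 instead of local disk would be the same shape, with the file read swapped for an object fetch.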
Having Bramble re-fetch the schema from a service if it encounters some kind of schema-related error would be a great addition. Though we have to be careful and protect the downstream services with circuit breakers, back-off, and so on.
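A minimal sketch of the back-off protection mentioned above, assuming a hypothetical `fetch` hook for the per-service schema query (a real circuit breaker would additionally stay open for a cool-down period after exhausting its attempts):

```go
package main

import (
	"fmt"
	"time"
)

// refetchWithBackoff retries a schema fetch with exponential backoff so a
// schema-related error doesn't turn into a thundering herd against the
// downstream service.
func refetchWithBackoff(fetch func() (string, error), maxAttempts int, base time.Duration) (string, error) {
	var lastErr error
	delay := base
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		s, err := fetch()
		if err == nil {
			return s, nil
		}
		lastErr = err
		if attempt < maxAttempts {
			time.Sleep(delay)
			delay *= 2 // back off: base, 2*base, 4*base, ...
		}
	}
	// Give up; a circuit breaker would now reject further re-fetches
	// for a cool-down window before letting one through again.
	return "", fmt.Errorf("schema re-fetch failed after %d attempts: %w", maxAttempts, lastErr)
}

func main() {
	calls := 0
	s, err := refetchWithBackoff(func() (string, error) {
		calls++
		if calls < 3 {
			return "", fmt.Errorf("transient error")
		}
		return "merged-schema", nil
	}, 5, time.Millisecond)
	if err != nil {
		panic(err)
	}
	fmt.Println(s, calls) // succeeds on the third attempt
}
```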
If the above is implemented, I think this would be somewhat redundant, since we'd issue a service schema re-fetch when encountering a mapping error in the first place. This re-fetch would happen before we even tried to query the downstream service. I think this should be split into multiple issues tackling the pre-assembly and schema re-fetch work individually, since each has its own caveats and problems. What do you think of that?
Well, we'd attempt the query once, receive an error from one of the downstream services that the query didn't make sense with the schema, and then refetch the schema. So the idea behind the rebuild URL was to be a bit pre-emptive/optimistic for speed, but at scale it's likely not going to get there before someone triggers it anyway, so yeah, it's probably not needed.
Yea absolutely; I'd probably suggest a total of 4 issues, which I'm happy to create and describe if you agree:
Hello!
Very very excited about this project, I've been wanting something like this for a long time.
I would love to be able to run the bramble server as an AWS Lambda behind an API Gateway; I can see three paths to achieving that (from least desirable to most):
The first I can achieve without anything from the project, but it is fairly brittle in my experience. Any plans (or appetite) to do the latter two, or to accept a pull request for either?