GLIBC_2.27 not found #78
You should be able to run locally, just like you did with “sam local invoke.” My shot in the dark guess is that the build environment has a later version of GLIBC than sam local’s emulator image based on this SO answer:
Can you tell me more about how you built the bootstrap executable? Which stack LTS (or otherwise) are you using? Are you building inside a docker image? If so which one? Hopefully we can track it down quickly.
Here's my stack.yaml:

```yaml
resolver: lts-16.31
extra-deps:
- envy-1.5.1.0
- hal-0.4.6
- megaparsec-9.0.1
docker:
  enable: true
```

Built with

I'm not interacting with Docker at all beyond the configuration flag to Stack above. If it helps, some versions on my host machine:
All that makes sense. In that case the build image should be fpco/stack-build:lts-16.31, as I think it will default to that unless you explicitly specify one (I think that will print to console when building, so you can sanity-check it if you’d like). I think the next question is what version of glibc is on that image. The host machine shouldn’t matter based on the config you’ve shown. Can you try:
I can give it a go myself later, but I’m restricted to my phone for the moment.
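A quick way to answer that question is to ask the image itself for its glibc version (a sketch; the image tag is the assumed default from above, and ldd ships with glibc):

```shell
# Print the first line of ldd's version output, which reports the glibc
# release the image carries (e.g. "ldd (Ubuntu GLIBC ...) 2.27").
docker run --rm fpco/stack-build:lts-16.31 ldd --version | head -n 1
```

If the number printed here is higher than the glibc in sam local's runtime image, that would explain the GLIBC_2.27 error.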
As a sanity check I've deleted

Is this part of the
Ah, yes, so that’s the image that the lambda is then running in, which sam local is selecting automatically. We should then run the same command for that image and see if it’s a mismatch, specifically whether it’s a lower version.
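Another way to pin down the mismatch is to inspect which GLIBC symbol versions the built binary actually requires (a sketch; objdump comes with binutils, and the binary name bootstrap is assumed from the Lambda packaging convention):

```shell
# List the versioned glibc symbols the executable links against and keep
# the highest one; that is the minimum glibc the runtime must provide.
objdump -T bootstrap | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -n 1
```

If this prints GLIBC_2.27 while the runtime image only ships an older glibc, the loader error above is exactly what you'd expect.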
Tried this just in case:
Ah, yeah, that’s very likely our culprit then. Thank you for checking all this btw! For next steps, we’ll need to see about setting up an image that’s a better match; first to validate the issue, and second to get some guidance and docs on how to resolve it. That’s unfortunate though, definitely an unfun wrinkle. I think the best bet is to use the Amazon image as a base and then add stack. Trying to downgrade the fpco image probably just leaves the door open for the next mismatch. I’ve been steadily working to get hal more broadly compatible and get it into current stack LTSes, so this is definitely a problem I’ll be prioritizing. If you want to continue working on it and tag-team a branch, the help would of course be welcome!
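A rough sketch of that direction, building on Amazon's image and installing stack on top (everything here is illustrative and untested; the package list should be double-checked against what GHC actually needs):

```dockerfile
# Hypothetical build image: start from Amazon Linux so the glibc used at
# build time matches what Lambda provides at run time.
FROM amazonlinux:2

# Toolchain bits GHC/stack typically need (illustrative list).
RUN yum install -y gcc gcc-c++ make gmp-devel zlib-devel xz tar gzip perl

# Install stack via its official installer script.
RUN curl -sSL https://get.haskellstack.org/ | sh
```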
Also worth noting that if you just need a quick fix and you can use an older stack LTS, you might try downgrading. I haven’t seen this issue before with older LTS images. Not a recommendation I’m happy with, haha, but if it unblocks you, it might be worth it until there’s a cleaner solve.
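For reference, the downgrade is just a resolver change in stack.yaml; lts-13.22 is used here only because it's the version reported as working elsewhere in this thread:

```yaml
# stack.yaml: pin an older LTS whose build image carries an older glibc
resolver: lts-13.22
```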
Much downloading later I've (unsurprisingly) replicated this issue. I was able to create a Dockerfile that seems to build with the same version of glibc. This seems to get beyond that error and execute the binary. I'm getting some new error in decoding Context, but I'm not totally sure yet why. You can try it out on the fix/sam-local-glibc-version-error branch now.
Still needs some digging and then quite a bit of polish, but some progress.
Really appreciate your support on this mate, cheers. Yeah, I'd be perfectly content to downgrade my Stack LTS for now, do you have a known good version I can target?
I’m beginning to think this is more an issue with sam/its version rather than the stack image. My Linux host has glibc 2.23 and runs just fine on actual Lambda when compiled without docker. On macOS, it seems to work without issue with lts-13.22, where the stack build also uses glibc 2.23. I seem to be using an older lambda image, back from lambci/lambda:provided. Since the current Amazon images only use 2.17, it doesn’t seem like a downgrade does the trick. I’ll dig more into sam though and see what I find. Definitely a priority to support its newest versions.
Hmm, so unfortunately there seems to be a string of issues. I don't think sam has been supported since 1.0, though I'm not sure on the exact timeline. Even if the glibc issue were resolved (which looks a bit more complicated, due to the docker images being built somewhat dynamically), sam currently doesn't pass in all the seemingly required values to build the context. Notably, the officially supported Rust runtime seems to have the same issues and not support sam after 1.0, so there seems to be a bit of the right hand not knowing what the left is doing. Other users have hit a similar glibc issue: awslabs/aws-lambda-rust-runtime#17. And both this and the Rust runtime expect that the traceId, functionArn, and deadline are passed in as headers, but sam does not send them. I'm not sure they've even realized that shortcoming yet. For both, it would be a breaking change or special branching when running locally. Notably, Rust's docs don't mention sam, and steer users instead to directly use the image that sam<1.0 used to use. I think for the time being, this probably needs to recommend the same while chipping away at some of the above issues. Here's their README section on docker usage: https://github.com/awslabs/aws-lambda-rust-runtime#docker You should be able to use the following as a local invocation:
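A sketch of what that local invocation looks like, following the pattern in the Rust runtime's README (the handler name and event payload here are placeholders; the bootstrap binary is assumed to be in the current directory):

```shell
# Mount the built binary into the image sam<1.0 used and invoke it directly.
docker run --rm \
  -v "$PWD":/var/task \
  lambci/lambda:provided \
  handler '{"message": "hello"}'
```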
Running that command gives me:
The container seems to hang at this point. It did on the first run pull the image anew (
Huh, I'm a bit stumped. Can you try building the binary with LTS 13.22? I checked the glibc version in (I'm downloading the LTS 13.22 image now, but it's quite large, so I'll also post the result of building with it if my DL completes first).
Success! LTS 13.22 successfully runs locally with the above

Curiously
(Midnight where I am, catch you tomorrow 🙂)
Awesome! I had just arrived at the same result myself 🥳 I feel a bit silly, I did know that

Interestingly enough, the error you got locally (the GLIBC version error) also likely is meaningful, in that I'd expect you'd see the same error if you tried to run it on Lambda. The issue you see with

Summary:
Glad we've got you running locally for the moment at least! Thanks much for the patience and troubleshooting with me :)
Alright, got one README update about gotchas done, and #81 adds a Dockerfile and some docs to use for building out the latest LTSes. I'd be curious to get your review and see if that image works for you!
I believe there's an issue with the documentation update where

Aside from that, the good news: I was able to successfully build and invoke with 8.6.5 (LTS 13.22). The bad news: 8.10.4 (LTS 17.4) gives me the following build error:
Ah, good catch on the missing

I did see the cryptonite error with 17.2 as well. And there's some guidance around this in cryptonite's README (and it's popped up in a bunch of their issues: haskell-crypto/cryptonite#324, haskell-crypto/cryptonite#326, haskell-crypto/cryptonite#332), so it appears it's had a fairly broad effect. Seems like the docker image either needs gcc > 4.9 (which at least is only a build-time dependency), or we need to pass the flag to cabal. Seems like stack supports this: https://docs.haskellstack.org/en/stable/nonstandard_project_init/#passing-flags-to-cabal

Let me know if this helps uncover anything; I'll do some more investigation shortly too.
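If the flag route works, it should just be a stack.yaml addition. A sketch, assuming the relevant flag is cryptonite's use_target_attributes (verify the flag name against the cryptonite README for the version in your snapshot):

```yaml
# stack.yaml: ask cabal to build cryptonite without per-function target
# attributes, which older gcc (< 4.9) can't compile
flags:
  cryptonite:
    use_target_attributes: false
```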
I was able to get 17.4 to work by adding the cryptonite flags to my
However, this isn't ideal, because it's yet another thing for a user to have to do themselves. I'm also planning to look into getting the Dockerfile to use gcc 4.9.x (vs 4.8.5 that it gets via

Then while writing this I realized that

So the easiest answer yet may be to completely remove
Can confirm that the cryptonite flag succeeds as a workaround on my end, successfully built and invoked on 17.4! 😃
(Didn't mean to close 😅)
Would you have any interest in pushing that Dockerfile as a built image to Docker Hub? I've realised I need to use it in CI as well, so I could push it myself, but I haven't actually needed to make any changes to it, and others may also find it useful.
Just opened #83 which should also help a bit (and is just generally a good idea). It's not really going to be possible to predict how to make any and every possible package build in a docker container, so I'm not sure how tractable it is to try and keep up with it in this project. Though something like cryptonite will be broadly used, so it still may make sense. It also may just make sense to add a section in the README with tricks like adding flags for common packages with build trouble. I'm not sure I'm ready to go much further than simply providing an example Dockerfile for now. I think that's clearly pretty important at this point, so people are steered towards success, but I'm not sure I want to commit to maintaining such an effort until I have a better idea of what kind of commitment it would require to do well.
Hi, I'm trying to apply the
Any hints on how to solve this? 😓
Hi @MarcCoquand, in Stack LTS 16.12 cryptonite is version 0.26, which shouldn’t need the flag to build correctly (and apparently can’t be included at all). Do you see an error if you omit the flag?
Ah no, I scrolled through this thread too quickly without reading and thought this was a fix for the glibc issue. I am having the error with GLIBC_2.27 not working, so I downgraded to LTS-13.22 and then it worked again. Sadly, this broke haskell-language-server, so I was looking for ways to change to the latest version again. I guess in order to make hal work with the latest version of GHC, I'll need to publish a Dockerfile with a distribution that comes with glibc 2.27 that I can then point stack to?
As an aside, does HLS work for you with a Dockerised Stack setup? I was under the impression there wasn't support for that (yet). |
Understood! Lots of troubleshooting in this thread, so the conclusion is sort of hard to draw. The README etc. should hopefully capture everything learned, but it's probably best to eventually close this with a short summary for other folks who end up here after finding the issue.
Interestingly enough, it’s sort of the opposite. The lambda env doesn’t have 2.27, so we need a Dockerfile that also doesn’t (unlike the default stack build images). There’s an example Dockerfile in progress on #81 that should do the trick.
Awesome, can confirm that with the new Dockerfile it built without a problem. Now my issue with haskell-language-platform and stack docker still persists though... But that's an issue with hlp.
We are using a somewhat simpler solution. The GLIBC problem doesn't exist if an "older" version of OS is used, for example,

So we use a build container that is based on

Take this example as an illustration:
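As an illustration of that approach (hypothetical; the base image and package list are examples of an "older" OS, not a tested recipe):

```dockerfile
# Hypothetical build container based on an older distribution, so the
# resulting binary only requires glibc symbols old enough for Lambda.
FROM ubuntu:16.04

RUN apt-get update && \
    apt-get install -y curl gcc g++ make libgmp-dev zlib1g-dev xz-utils

# Install stack via its official installer script.
RUN curl -sSL https://get.haskellstack.org/ | sh
```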
I hope that it helps. |
I'd still really like to get a simple, elegant approach to this long term, as I feel like it does have an ergonomic impact. One thing I wanted to get down is that copyright/copyleft issues of complete static linking (like including glibc, musl, etc.) shouldn't be an issue for lambdas. This does get controversial, but practically, the copyright concerns are only a factor in distribution of statically linked software. Anyone running a lambda is just running the statically linked software, which is within the license. This means that with static linking, a brief warning about copyright concerns is probably above and beyond. More to do in looking down this path.
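For reference, a fully static build with stack is usually expressed like this (a sketch; whether it actually links cleanly depends on the libraries in the project, and musl-based images are often needed for truly glibc-free binaries):

```yaml
# stack.yaml: pass static-linking options to the linker for every target
ghc-options:
  "$everything": -optl-static -optl-pthread
```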
Have I misinterpreted the instructions regarding viability of local invocation or is this a bug?:
This is on Arch with everything up to date. The build appears to succeed, and

stack run

gives no such error (just missing the AWS environment, as you'd expect).