[Question] Why are Business Central artifacts not stored in the Microsoft Artifact Registry? #3736
Comments
To be honest, that has been a problem forever. At some point, somebody at Microsoft decided to make Docker image creation a runtime problem, presumably because there were too many permutations. That's why you download the generic Docker image, the application artifact, and the platform artifact separately. Docker itself does a lot to reduce the amount that needs to be downloaded when a new version is published, but using artifacts negates that. Just as a side note: I realised that even when using only the compiler directly, AL-Go will still download the full artifact just to extract 148 KB. So a 2.5 GB file is downloaded to get 148 KB. That's an efficient system if I've ever seen one.

But as for a solution: we switched to the weekly build artifact, which keeps the artifacts you have already downloaded relevant for longer. As for actual caching of the Docker images, that depends on whether you have multiple machines that need to access the cache or whether it's sufficient to store the image on each server individually. The drawback of caching on every server is that it takes longer to fill the cache, because it needs to be filled on every server; the advantage is that the setup is far simpler and probably more reliable. For us, the biggest improvement was using the weekly artifact instead of the latest, because even if your caches are set up correctly, they don't help much if the thing you want to cache changes multiple times per day. And then just set the cacheImage property when starting the container.
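For illustration, a minimal sketch of that setup using BcContainerHelper directly, assuming the standard Get-BCArtifactUrl and New-BcContainer cmdlets, with -imageName providing the local image caching; the container name, image name, and credentials are placeholders:

```powershell
# Resolve the current weekly sandbox artifact instead of "latest", so the
# artifact URL (and anything cached from it) only changes once a week.
$artifactUrl = Get-BCArtifactUrl -type Sandbox -country us -select Weekly

# Placeholder credentials for the container.
$credential = New-Object pscredential 'admin', (ConvertTo-SecureString 'P@ssw0rd!' -AsPlainText -Force)

# Build the container and cache the resulting image locally under a stable
# name; later runs against the same artifact reuse that cached image.
New-BcContainer `
    -accept_eula `
    -containerName 'bcbuild' `
    -artifactUrl $artifactUrl `
    -imageName 'mycache/bcsandbox-weekly' `
    -auth UserPassword `
    -Credential $credential
```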
We actually started out creating images for all versions of NAV/BC, but the number of images that had to be rebuilt every time there was a security update of one of the components involved was simply too big. We are investigating what we can do differently. I would have loved to use universal packages, but unfortunately they cannot be anonymous :-( We will investigate what we can do. @jonaswre - for the compilerfolder feature, we need the compiler + all the apps. In the meantime, the compiler is now available as a NuGet package and the apps are as well, so we will be looking at optimizing this piece - one thing at a time :-)
@freddydk That's awesome to hear! I was quite surprised to find out that just compiling a BC app takes 15 minutes on a GitHub-hosted runner, when the apps plus compiler are only about 100 MB.

I've had my share of frustrations with Business Central and Microsoft recently, which led me to think about ways the system could be improved. One of the most problematic design decisions, in my opinion, is the weakening of the separation between runtime, platform, and the base app. Ideally, the only thing that should need to be deployed with a Docker image or artifact is the runtime itself, which would significantly reduce the number of permutations. All code written in AL should be distributed and deployed in the same way, regardless of the publisher, likely through a package manager, as is common in most programming languages. Additionally, the infrastructure for handling pre-releases would need to be streamlined, particularly for partners who want others to extend their apps, a category Microsoft itself should fall into as well. If this approach had been considered from the start, many of the necessary tools would likely have been developed earlier.

Furthermore, the app.json file should explicitly list all direct dependencies rather than relying on the "application" and "platform" properties. Versioning an environment shouldn't be tied to individual apps (including the base app) but rather to the runtime. This would mean that runtime apps only need to be compiled for the runtime, not for every new major or minor version of the base app or platform.

On a related note, it's worth considering the decision to develop AL, a language that feels somewhat under-engineered, rather than using one of the many established programming languages out there. For instance, how long did it take for Java to support multiple interface implementations or generics? I'm sure some of these ideas may be easier said than done, but if Microsoft doesn't have the resources to do it right, who does?
The Application dependency is an indirect dependency, which loads Application.app, which in turn has dependencies on the base app, the system app (and recently also the foundation app). That was introduced in order to overcome several challenges with the prior approach and will likely not go away anytime soon. Maybe once all embed apps are normal apps, all localizations are normal apps, and the base application has been restructured, this could happen, but I wouldn't bet on it.

On the AL language: the decision to go with it comes from a long investigation, which included the need for something we could convert C/AL (the NAV language) to. All attempts to convert that to C# or other languages failed, producing unreadable code and lacking the security we need when hosting arbitrary code in multi-tenant services online.

On Docker images: we started out creating specific images, one image per version, meaning that every new on-prem version would yield 20 new images. At any given time we support 18 different versions of Business Central, but many more versions are used on a daily basis; I still see people spinning up old NAV containers. Back in the days, we supported multiple different OSes (LTSCs and SACs), I think 6 at the most. Back then, this was 20 * 18 * 6 different images = 2160. For sandbox images, it was more countries and more builds, probably in the area of 750-1000 images per day. The big problem comes when the base images are updated and SQL Server, PowerShell, dotnet, etc. have security patches. We are not allowed to use images with security vulnerabilities, so we would have to update every single image out there every month (sometimes multiple times a month). The number of images was so big that when I set up a series of Azure VMs to recreate them all on mcr.microsoft.com, it went down due to traffic.

Today, we have 3 supported OSes, and I re-generate the generic images when they are needed. Not sure where this ends; it could be that the latest versions are available as images and older versions as artifacts, but we also need to find a different way to store artifacts, as anonymous blob storage won't be allowed for us in Microsoft in the future. If we were on Linux, we could store artifacts in a container registry (the alpine base image is 5 MB). On Windows, the smallest image is nanoserver at 250 MB, and that also gets updates all the time. So... we are considering what to do...
Have you looked into OCI artifacts on e.g. an Azure Container Registry? |
We are using OCI tags to mark images as stale - but I haven't looked at it in other ways. |
Reading about OCI artifacts and ORAS - it definitely sounds like something we could use to replace our storage accounts and re-use the infrastructure and tooling behind registries. This might be just what I have been looking for... |
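Purely as an illustration of that idea, here is a rough sketch of what pushing and pulling a BC artifact with the ORAS CLI against a container registry could look like; the registry name, repository layout, tag, and media type are all invented placeholders, not an announced Microsoft layout:

```powershell
# Log in to the target registry (placeholder name).
oras login myregistry.azurecr.io

# Push a BC application artifact as an OCI artifact; repository, tag and
# media type are illustrative only.
oras push myregistry.azurecr.io/bcartifacts/sandbox/us:24.0.12345.0 `
    application.zip:application/vnd.businesscentral.artifact.layer.v1+zip

# Pull it back down on a build agent into a local folder.
oras pull myregistry.azurecr.io/bcartifacts/sandbox/us:24.0.12345.0 -o .\artifact
```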
@freddydk Thanks for your explanation. From an outside perspective, it seems odd that Microsoft has certain apps with special privileges. Many of the challenges we face could be resolved if everything were treated equally. For example, if Microsoft had to release all of their apps through AppSource without any special treatment, they would likely encounter the same issues we face and address them sooner. It's a common problem when the tools you develop aren't the ones you use regularly.

Regarding the AL language, I believe more time and resources should be dedicated to improving it. It's quite telling that every AL developer has experienced the need to frequently "Reload Window" to restart the language server due to bugs. The fact that the language server struggles with performance when working in the base app is concerning, especially given that the base app isn't even particularly large by modern standards.

With all the daily frustrations we encounter while working with BC, seeing announcements for shiny new AI features can be frustrating. It's hard not to wonder why fundamental issues that impact our day-to-day work aren't receiving more attention. Combined with ongoing frustrations with Teams, Outlook, Yammer, buggy admin panels, and long loading times, it feels like Microsoft has lost focus on the core issues that matter most and is instead relying on an "advanced Clippy" to solve everything.
I will forward your frustration. |
Thanks, maybe that helps. I had a one-on-one with Søren Alexandersen; he said I should discuss this on Yammer, but Yammer has been showing a 404 for the Dynamics community lately. If you need some ideas on how to improve the language, let me know. I have a bare-bones Tree-sitter implementation for AL; if there is interest, I would be willing to open-source it.
Hi, and sorry if this is the wrong repo.
I'm just wondering why the artifacts are not stored in the official registry?
Is there a way to get them from a different registry so we can cache them in our proxy registry?
Currently the time of our build pipeline is greatly inflated because of the inherent image pull from the external "storageAccount" every time.
My current solution would be to periodically pull an image and push it to our private registry, and pull it from there for building. Is there a better option?
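For what it's worth, a rough sketch of that mirroring idea using BcContainerHelper plus plain docker commands; the private registry name is a placeholder, and only the generic image can be mirrored this way, since the application and platform artifacts are plain files on a storage account rather than registry images:

```powershell
# Ask BcContainerHelper which generic image fits this host's Windows version
# (returns something like mcr.microsoft.com/businesscentral:<osversion>).
$genericImage = Get-BestGenericImageName

# Mirror the generic image into the private/proxy registry (placeholder name).
$mirroredImage = "registry.mycompany.local/" + ($genericImage -replace '^mcr\.microsoft\.com/', '')
docker pull $genericImage
docker tag $genericImage $mirroredImage
docker push $mirroredImage

# On the build servers, point New-BcContainer at the mirrored generic image;
# the application and platform artifacts are still downloaded separately.
New-BcContainer `
    -accept_eula `
    -containerName 'bcbuild' `
    -artifactUrl (Get-BCArtifactUrl -type Sandbox -country us -select Weekly) `
    -useGenericImage $mirroredImage
```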