[Question]: Why does the Initialize Containers step always mount the workspace in the container? #5005
Comments
@BasJ93, could you please provide more details on your request? I recommend implementing a workaround to remove unnecessary sources at the end of your stage, or you could use the checkout task with the option
No, I'm looking for a way where the workspace is not mounted at all. Our use case (that we wish to get working) is running unit tests in an isolated container environment, so that we run a specific set of tests that require data placed in our custom image. Because the workspace is mapped into the container, we end up running more tests than we need, since the other tests in the project are also detected. We would very much like to avoid cleaning the workspace on the host, as our repository is quite large and would then require a full clone again. In conclusion: is it possible to configure the agent to not mount the workspace?

@ivanduplenskikh Any comment?
@BasJ93, apologies for the delay in getting back to you; I have been gathering the necessary details related to your question. The mapping of the _work directory is by-design behavior. You can refer to this documentation for a better understanding of why the _work directory is needed to keep working files and folders: https://github.com/microsoft/azure-pipelines-agent/blob/master/docs/jobdirectories.md. Currently, there is no mechanism in place to prevent this folder from being mounted in a container. However, you might work around it by either deleting unnecessary files in a prior step or by setting up a separate agent dedicated to testing.
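The "delete unnecessary files in a prior step" workaround could look roughly like the following. This is a minimal sketch: the folder names are made up for illustration, and `$(Pipeline.Workspace)` is simulated with a temporary directory so the script is self-contained.

```shell
# Simulate the agent workspace; in a real pipeline this script would run
# as a host-side step before the containerized test stage, with
# workspace="$(Pipeline.Workspace)" instead of a temp directory.
workspace="$(mktemp -d)"
mkdir -p "$workspace/OtherProject.Tests/bin" "$workspace/Isolated.Tests/bin"
touch "$workspace/OtherProject.Tests/bin/OtherProject.Tests.dll"

# Remove the folders whose test assemblies should not be discovered
# by the containerized stage.
rm -rf "$workspace/OtherProject.Tests"

ls "$workspace"   # prints: Isolated.Tests
```

Note that this only trims what the mounted workspace exposes; it does not prevent the mount itself, and the removed sources would need to be restored (or re-fetched) by any later stage that needs them.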
Describe your question
Since running a stage within a container is described as a way to always have a clean environment, it feels odd that multiple host folders are automatically mounted into the container. I would expect to get just a running container, and to still need to perform the checkout step inside the container, for example.
Based on this snippet of Agent code, it would not appear so:
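The referenced snippet is not reproduced here, but the behavior it describes amounts to the agent adding volume options of the following shape when it creates the job container. The paths and image name below are examples for illustration, not the agent's actual values or code.

```shell
# Example only: how a host _work directory ends up inside the job container.
work_dir="/home/agent/_work"             # host _work folder (example path)
mount_args="-v ${work_dir}:${work_dir}"  # mapped to the same path in the container

# Print the shape of the resulting docker invocation (example image name).
echo "docker create ${mount_args} myregistry/custom-image:latest"
```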
Currently, this is problematic for our use of containerized stages, as the stage suddenly contains test DLLs (and application code) from a different stage, which we do not want.
Have we missed a knob/argument/feature flag that allows us to disable this behavior?
Versions
Azure DevOps agent 3.243.1 on Ubuntu 20.04 LTS
Environment type (Please select at least one environment where you face this issue)
Azure DevOps Server type
dev.azure.com (formerly visualstudio.com)
Operation system
Ubuntu 20.04
Version control system
Git with external repo
Azure DevOps Server Version (if applicable)
3.243.1