feat: make use of Nix files to set up dev env #22
base: main
Conversation
Kicked off a build to see if the image built from this branch works: https://github.com/joyeecheung/devcontainer/actions/runs/20376852332 (when it's done it'll be published to https://hub.docker.com/r/joyeecheung/node-devcontainer)
Co-authored-by: Joyee Cheung <joyeec9h3@gmail.com>
Dockerfile (outdated)
# Setting up direnv for the local clone
ARG USE_SHARED_LIBS=false
# As much as possible, we want to use the defaults set in shell.nix so the DX is consistent for users of devcontainers and users of Nix.
I feel that we still need to figure out how to add extra configuration flags to this instead of saying "don't". At some point the CI use case is bound to diverge from the local development use case, e.g. in terms of whether to compile with -g: that might not be very critical in CI, or may even be a bit excessive there, but it's critical for local debugging. It would be better for the devcontainer to ship the build cache with debug symbols; otherwise the image is quite limited for the development use case. Can we split that with different arguments, so that we can have different images built for different use cases?
Also, after this lands, is there a way to test build configurations without landing a change in the upstream first? If changing the image building process requires changing the upstream, it could end up creating a circular dependency that would be difficult to break once this image is used in the upstream for tests somehow, which is the primary reason why we are trying to unify the two. Can we update the flow to allow a bit more flexibility? Say, instead of always pulling from the upstream and creating a dependency in the building process, we save a copy of those scripts here and add a daily workflow to update them if necessary. That way, to break the circular dependency for testing, we can just update one of the copies in a PR and sync up later after the upstream changes get merged. Or we can introduce some kind of overriding config here that can be used to update the two independently.
> at some point the CI use case is bound to diverge from the local development
100%, CI already diverges by passing extra args and by opting out of e.g. devTools and benchmarkTools, see https://github.com/nodejs/node/blob/607a741941994a5f27774a807ef36149963cb821/.github/workflows/test-shared.yml#L191-L195
IMO the defaults in shell.nix should provide an "ideal dev env", because it's much easier for CI to override arguments than to expect all users to fiddle with passing custom flags.
> it would be better for the devcontainer to ship the build cache with the debug symbols, otherwise the image is quite limited for the development use case
I agree, and my point is that whatever flag would make sense as a default in the devcontainer also makes sense as a default for Nix users, because the use cases of the two systems overlap almost perfectly IIUC (CI is a different beast, but it can use non-default options).
Regarding your concern about a possible circular dependency, I don't foresee any problem: we can override the env variable Nix creates for us, or just re-run ./configure with different flags. Also, updating the *.nix files locally (e.g. by pulling a different commit) would be picked up as well, so even for a CI use case that should be fine.
Passing flags when building the image is possible, and I'm not against it if there are use cases for it; my point is that we should resist it so we don't end up maintaining two different sets of defaults if we can avoid it.
> because it's much easier for CI to override arguments than to expect all users to fiddle with passing custom flags.
I am thinking more about the case of testing something locally. For example, if one wants to update the devcontainers to change a configure option in the image, or to add a new image variant (e.g. we have the shared build here; suppose we want to add a without-intl variant), currently they just need to update the scripts in this repo, do a docker build with the arguments locally to see if it builds, or use a GitHub action in their fork to publish to a custom Docker Hub repo to test the resulting image. After this is merged, does that mean they have to submit a PR to nodejs/node first to support an additional variant via a new *.nix file and get it merged, before trying the docker build / GitHub action locally? And if that turns out not to work with the docker arguments, would that result in more back and forth in nodejs/node just to iterate on it, which can lead to many 7-day delays in the iteration loop when reviews are scarce?
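(As an aside, purely for illustration: today that local test is roughly the sketch below; the image tag is hypothetical, only the USE_SHARED_LIBS build arg comes from the Dockerfile in this PR.)

```sh
# Build the image locally, toggling the variant via the build arg from this PR's Dockerfile
docker build --build-arg USE_SHARED_LIBS=true -t node-devcontainer:shared .

# Poke around in the resulting image to check the build artifacts
docker run --rm -it node-devcontainer:shared bash
```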
I see that it seems to be possible to solve this if we parameterize the *.nix files being pulled so that they can point to a fork, as suggested. In that case, can we add it to the scripts here and test it with a GitHub action, to confirm that the new workflow works? (I have a fork at https://github.com/joyeecheung/devcontainer/actions that I use for testing and am happy to help test it; or feel free to set up your own fork if you have a Docker Hub account, publishing is free and should only require a signup. The secrets and variables that need to be configured can be seen in https://github.com/nodejs/devcontainer/settings/secrets/actions.)
Sorry for the back and forth, by the way. I do hope this can land so that we can unify the dev dependencies, especially the package management (which is going to be trickier and trickier now that we have Temporal and the like); I just want to confirm that it's still possible to iterate on the configs without having to add an upstream merge into the iteration loop.
The Nix files have been written with the intent of supporting any build option (I may be the only user of them besides CI, so there might be blind spots, but I would consider those bugs). E.g. for a build without intl, you're supposed to pass --arg icu null; for small ICU, that would be --argstr icu small; and for many options there is no dedicated argument, you're supposed to pass them as extras, e.g. --arg extraConfigFlags '["--enable-asan" "--without-inspector" "--without-node-options"]'.
But it's worth keeping in mind that even though Nix tries to be helpful and sets a CONFIG_FLAGS env variable, it won't fight you if you decide to override it and set it to any other value, which I find important when one just wants to hack around.
I guess the TL;DR is:
- changes to the Nix files should not be necessary (unless there's a bug / blind spot)
- if there is a bug / blind spot in the Nix files, fixing it is not a blocker; one can always set the env variable "the old way".
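(To make that concrete, here is a hedged sketch of those invocations; the -I nixpkgs path is copied from the .envrc line in this PR's Dockerfile, and the sketch assumes you run nix-shell from the root of a node checkout that ships these *.nix files.)

```sh
# Build without intl
nix-shell -I nixpkgs=./tools/nix/pkgs.nix --arg icu null

# Build with small ICU
nix-shell -I nixpkgs=./tools/nix/pkgs.nix --argstr icu small

# Options without a dedicated argument go through extraConfigFlags
nix-shell -I nixpkgs=./tools/nix/pkgs.nix \
  --arg extraConfigFlags '["--enable-asan" "--without-inspector"]'

# Or, inside the shell, override what Nix set and configure "the old way"
export CONFIG_FLAGS="--debug"
./configure $CONFIG_FLAGS
```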
> I see that it seems to be possible to solve this if we parameterize the *.nix files being pulled so that they can point to a fork, as suggested. In that case, can we add it to the scripts here and test it with a GitHub action, to confirm that the new workflow works?
I don't really understand that part 🤔 My takeaway from #23 (review) was that we should not expect the devcontainer in the wild to contain a single commit, but to be mounted on an active clone, probably slightly ahead of the devcontainer. So if my understanding is correct, there's nothing to add to the script: the Nix files would be at the HEAD of the clone. That's what I meant by "pulling a different commit"; I was talking about local changes after the image was built and mounted.
I guess I have trouble foreseeing a use case for needing to update the Nix files before building the image, so it's hard to think about solutions without correctly understanding the concern.
Another way of looking at this is that there used to be only one variant before this PR; now there are two. What I'm trying to say is that I'm not convinced that "it hurts customization" is a valid point; what it does is raise the bar for contribution, which is a tradeoff that we may or may not be willing to make.
Ah, I think I see what you mean now. Take adding --without-intl for example; to do an integration test on it with a PR here:
- Update the Dockerfile so that the PR adds --arg icu null to L52 below when a docker argument (passed from a new GitHub Actions yml in the same PR) is true, or some equivalent of this (a sketch of what that could look like follows below)
- Kick off a GitHub action from a fork to build the image without intl
- The resulting image should have build artifacts without intl, and then it can be tested

If my understanding is correct, can you add some documentation for this, either in the comments or in README.md? It's not so straightforward now that the source of truth of "what can be customized" lies in a different repo.
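(If that reading is right, here is a hedged sketch of the Dockerfile side of the first step, reusing the ARG / RUN echo pattern from the diff below; WITHOUT_INTL is a hypothetical build arg, not something in this PR.)

```dockerfile
# Hypothetical variant switch, mirroring the existing USE_SHARED_LIBS pattern
ARG WITHOUT_INTL=false
# Append "--arg icu null" to the direnv line only when the variant is requested
RUN echo "use nix --impure -I nixpkgs=/home/developer/nodejs/node/tools/nix/pkgs.nix$([ "${WITHOUT_INTL}" != "true" ] || echo " --arg icu null")" > /home/developer/nodejs/node/.envrc
```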
Dockerfile (outdated)
# Setting up direnv for the local clone
ARG USE_SHARED_LIBS=false
# As much as possible, we want to use the defaults set in shell.nix so the DX is consistent for users of devcontainers and users of Nix.
RUN echo "use nix --impure -I nixpkgs=/home/developer/nodejs/node/tools/nix/pkgs.nix$([ "${USE_SHARED_LIBS}" = "true" ] || echo " --arg sharedLibDeps '{}' --argstr icu full")" > /home/developer/nodejs/node/.envrc
Can you put this into a file that gets copied? I think we can probably do something like this (it can probably be merged with the direnvrc setup above):

ARG IMAGE_VARIANT=dev
COPY ./nix/ /home/developer/nix/
RUN cp "/home/developer/nix/${IMAGE_VARIANT}.envrc" /home/developer/nodejs/node/.envrc

And add a dev.envrc and shared.envrc to the ./nix directory, so that instead of echoing them inline, which is a bit hard to read, we just use file-based configs for the different images.
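(As a sketch only, with the contents lifted from the RUN echo line in this diff, the two files could look like this:)

```sh
# nix/shared.envrc (shared-libs variant, relies on the defaults in shell.nix):
use nix --impure -I nixpkgs=/home/developer/nodejs/node/tools/nix/pkgs.nix

# nix/dev.envrc (default variant, opts out of shared libs and uses full ICU):
use nix --impure -I nixpkgs=/home/developer/nodejs/node/tools/nix/pkgs.nix --arg sharedLibDeps '{}' --argstr icu full
```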
I first attempted to create the Docker image using Nix tools, but that ended up being too complicated; in the end it was easier to install Nix from the Dockerfile and update the existing scripts to use Nix-provided tools.
Worth noting the image contains a node built with all shared libs, similar to what the test-shared GHA workflow builds on nodejs/node; we could certainly disable that and use the vendored static deps instead, the tradeoff being that the build would take longer.

Fixes: #18
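(For readers new to the approach, a minimal sketch of what "installing Nix from the Dockerfile" can look like; the base image, user name, and installer invocation here are assumptions for illustration, not necessarily what this PR's Dockerfile does.)

```dockerfile
FROM debian:bookworm-slim

# curl + xz are needed by the Nix installer; /nix must be writable by the
# non-root user for a single-user install. Illustrative only.
RUN apt-get update && apt-get install -y ca-certificates curl git xz-utils \
    && useradd -m developer \
    && mkdir -m 0755 /nix && chown developer /nix
USER developer

# Single-user Nix install via the official installer script
RUN curl -L https://nixos.org/nix/install | sh -s -- --no-daemon
ENV PATH="/home/developer/.nix-profile/bin:${PATH}"
```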