SQR-064
Building the Rubin Science Platform JupyterLab container#
Abstract
This document describes how the sciplat-lab container, which provides the JupyterLab implementation for the Rubin Science Platform, is built.
If you just want to know how to do custom builds, skip ahead to Modifying Lab container contents.
Container architecture#
The dual-Python-environment design of the sciplat-lab container is described in SQR-088.
Repository#
The assembly instructions for the Lab container reside in the LSST SQuaRE sciplat-lab repository on GitHub.
Layout#
There are several different categories of files in the repository directory.
- `Dockerfile` controls the build.
- `static` holds files copied as-is into the container.
- `scripts` holds scripts run inside the container at various stages of the build.
- `README.md` describes the build process.
Branch Conventions#
Standard Lab containers (that is, dailies, weeklies, release candidates,
and releases) are built from the main branch. Experimental
containers may be built from any branch. The build process enforces
this condition, and will force the tag to an experimental one when
building from any other branch than main.
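The enforcement described above can be pictured as a small shell check. The function below is an illustrative sketch; its name and exact prefix handling are assumptions, not the build script's literal code.

```shell
# Sketch of the branch check: builds from any branch other than main
# are forced to an experimental tag. Function name is invented.
enforce_experimental_tag() {
    branch="$1"
    tag="$2"
    if [ "$branch" != "main" ]; then
        case "$tag" in
            exp_*) ;;              # already experimental; leave as-is
            *) tag="exp_${tag}" ;;
        esac
    fi
    printf '%s\n' "$tag"
}
```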
Build process#
The build process is usually run from GitHub Actions.
It is, however, also possible to build the container locally.
GitHub Actions input parameters for build#
- `tag` (mandatory): the tag from which to install the DM Stack, e.g. `w_2026_14`. If it begins with a `v` (indicating a DM Stack release version), that `v` becomes an `r` in the output version.
- `supplementary` (optional): this (with a prepended `_`) is the text added to the end of the container tag (`exp_` will be automatically prepended). It should usually describe the reason for the experimental container's existence; e.g. a value of `rje` used on a build from version tag `d_2026_04_22` would yield `exp_d_2026_04_22_rje`.
- `image` (optional): the URI for the image you're building and pushing. It defaults to `us-central1-docker.pkg.dev/rubin-shared-services-71ec/sciplat/sciplat-lab,ghcr.io/lsst-sqre/sciplat-lab,docker.io/lsstsqre/sciplat-lab`. As the default makes plain, it may be a comma-separated list of URIs, in order to push to multiple targets. If you are building an experimental image, you probably want to keep only the target used by the environment you're testing in. For the IDF, keep the `pkg.dev` target and remove the `ghcr.io` and `docker.io` targets; at USDF, keep only the `ghcr.io` target.
- `push` (boolean): if this is a YAML-false string (including the empty string), the container will be built but not pushed. This is useful if you are testing build machinery but do not care about the contents of the container.
- `input` (optional): the name, and any tag prefix, of the input image you're starting with. It defaults to `ghcr.io/lsst-sqre/nublado-jupyterlab-base:latest`. You almost certainly only want to change the tag. If you have built a modified base image (by opening a PR against Nublado that changes things inside `jupyterlab-base`), the tag will be something like `tickets-DM-53996`. The tag is case-sensitive, and dashes replace slashes in the branch name.
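The tag transformations described above can be sketched as a small shell function. The function name is invented for illustration, but the transformations follow the text.

```shell
# Hedged sketch of how the output container tag is derived from the
# tag and supplementary inputs described above.
derive_output_tag() {
    tag="$1"
    supplementary="$2"
    # A leading "v" (a DM Stack release) becomes "r" in the output.
    case "$tag" in
        v*) tag="r${tag#v}" ;;
    esac
    # A supplementary value marks the container as experimental.
    if [ -n "$supplementary" ]; then
        tag="exp_${tag}_${supplementary}"
    fi
    printf '%s\n' "$tag"
}
```

For example, `derive_output_tag d_2026_04_22 rje` yields `exp_d_2026_04_22_rje`, matching the example above.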
Multiplatform support#
The GitHub Action uses multiplatform build-and-push to publish a single multiplatform image containing both amd64 and arm64 variants.
A minimal but effective RSP instance including the Lab, the Portal, a TAP server, and Gafaelfawr, can be run entirely on ARM now (demo.lsst.cloud, although not always running, serves as an example).
There may be some less-core-to-the-RSP services that require x86_64 architecture.
Other architectures and operating systems are not supported.
Local build#
Set the Docker build `ARG`s for `tag`, `input`, and `version`.
The `tag` and `input` are as described above.
To calculate `version` from the tag, set the environment variables `tag`, `image`, and `supplementary` (all as described above), source the helper functions, and use the output of `calculate_tags | cut -d ',' -f 1`.
This works on both amd64 and arm64 Linux.
Please do not push a locally-built image. GitHub Actions will handle putting the images into the repositories they need to go to, and will also manage creation of multiplatform images.
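A local build might look like the following. The `ARG` names mirror the text above but have not been checked against the repository's Dockerfile, so this sketch only assembles and echoes the command rather than running it.

```shell
# Hypothetical local build invocation; ARG names and the hard-coded
# version value are assumptions for illustration.
tag="w_2026_14"
input="ghcr.io/lsst-sqre/nublado-jupyterlab-base:latest"
version="w_2026_14"   # in practice: calculate_tags | cut -d ',' -f 1
cmd="docker build --build-arg tag=${tag} --build-arg input=${input} --build-arg version=${version} -t sciplat-lab:${version} ."
echo "$cmd"
```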
Retagging an image#
There is also a process for retagging an image. We typically use this for moving the recommended tag, but it can be used to add an arbitrary tag to any image.
This too is a GitHub Action.
GitHub Actions input parameters for retag#
- `tag` is the tag on the sciplat-lab input container, not the upstream DM Stack tag. (When the input tag represents a weekly build they are identical, but for release versions the DM Stack tag `v<something>` will have become the sciplat-lab tag `r<something>`.)
- `supplementary` is the new tag to be applied to the image. No substitution is done. It is mandatory in the retag case.
- `image` retains the same meaning and default: it is the target repository (or comma-separated list of repositories) to which the new tags should be pushed.
Manual retag#
Please don’t. You should never be pushing a local build. Since you’re not pushing it, you don’t need to tag it in any particular manner. Just run the container under whatever tag you built it as.
Lab build process#
When we run docker build, the following sequence of events takes place.
1. We copy `/etc/passwd` and `/etc/group` into place and generate their corresponding shadow files. This is necessary for the package installation in the next step to run, as `dpkg` assumes that accounts that either came with the base system or were installed by dependencies stayed installed; it uses them during installation.
2. We run `scripts/install-system-packages`. This first updates installed system packages, since the `jupyterlab-base` image dates to the last release of Nublado and may be out of date. Then we add system packages that we want in the RSP image but not in the base container; currently that's `ssh`, `quota`, and `emacs-nox`.
3. We copy most of our static files into the container.
4. We install the DM Stack from the supplied `tag` using `lsstinstall`, then strip it down as much as possible.
5. We install the remaining static files whose operation depends on the stack.
We run `install-rsp-user` to create the environment in which an RSP user works. This is the piece you are most likely to need to modify. There are several phases of installation for the RSP user environment.
First is conda installation of `rubin-env-rsp`. This is a conda metapackage, defined in the feedstock recipe. Its version is pinned to the version of `rubin-env` installed with `lsstinstall`, and we do not update dependencies. However, that doesn't mean that rebuilds of the same version separated by time will necessarily have all the same package versions: although the DM Stack installation does lock its direct dependencies, it lets its transitive dependencies float. This has not been a huge operational problem thus far, but it is a lurking source of anxiety.
Next we add four Telescope and Site packages from their conda channel, which is not conda-forge. We also do not update dependencies for those. It would be very helpful if these were put onto conda-forge and then simply added to `rubin-env-rsp`, but neither SQuaRE nor T&S wants to accept responsibility for their maintenance as conda-forge packages, so we seem to be stuck with the status quo.
Next, we have a few tech debt steps. The desire to support old releases, and to have reasonable reproducibility of a particular release, is at odds with CST's desire to add new features to old releases at will. We are currently (April 2026) in negotiations with Build Engineering and Pipelines to require a new point version of an old release if additional functionality is requested. That would eliminate these steps, as packages would simply be added to a backport branch of the feedstock and a new point release then made.
Until we are done with the 29.x series of releases (post-DP2), we must check whether `lsdb` has been installed already, and if not, install `lsdb`, `hats`, and `cdshealpix`. Then we do the same with `reproject`. If we had packages we had to `pip`-install into the conda environment, we would do it here. Right now we do not, and we hope to hold the line: pip-installing on top of conda leads really, really easily to dependency hell. Here endeth the tech debt steps.
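The conditional backport step can be sketched as follows. The import probe and the reported installer command are illustrative assumptions; the actual commands in the install script may differ, and the sketch only records what it would do rather than installing anything.

```shell
# Hedged sketch of the lsdb backport check: install the lsdb packages
# only when lsdb is absent. Probe and package list per the text above.
if python3 -c 'import lsdb' 2>/dev/null; then
    action="skip"
else
    action="install lsdb hats cdshealpix"
fi
echo "$action"
```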
We install `lsst-rsp` separately. We have deliberately chosen to exclude it from `rubin-env-rsp` because we want to be able to iterate it more quickly than we update the `recommended` image, and it is a SQuaRE-maintained package.
We do conda cleanup.
Now we switch to the UI Python environment and perform needed modifications there.
We deactivate the JupyterLab console (basically a fancy IPython REPL; apparently some people don't want to use a notebook for that).
Finally we install the spellchecker, which is very strangely designed and implemented, and which is included at CST’s request.
We install a small compatibility layer designed to ease the transition from the one-python model to the current model which separates the UI and payload Python environments. This can very likely go away soon.
We generate manifests for what’s been installed. This turns out to be very useful for debugging regressions, because generally only a handful of packages change day-to-day, and the nature of the regression usually gives a good clue as to which package is the likely culprit.
Finally, we clean up files left behind by the build process.
Modifying Lab container contents#
This is probably why you’re reading this document.
It’s very likely that whatever changes you make will go into the
install-rsp-user script.
Conda changes#
The most common scenario is that you want to try out a new conda package for inclusion before you add it to rubin-env-rsp.
In that case you’d add your new package to install-rsp-user in between installation of rubin-env-rsp and lsst-rsp.
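A fragment like the following would go in that spot. The package name and installer flags are invented for illustration, and the sketch echoes the command instead of running it, since the literal invocation in `install-rsp-user` may differ.

```shell
# Hypothetical trial-package step for install-rsp-user: after
# rubin-env-rsp, before lsst-rsp. Name and flags are assumptions.
trial_pkg="my-trial-package"
cmd="mamba install --no-update-deps --yes ${trial_pkg}"
echo "$cmd"
```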
When you’re happy with the package, create an RFC proposing the addition of the package to rubin-env or rubin-env-rsp (depending on whether the functionality is RSP-specific or more general).
Combine that with a PR to rubinenv-feedstock.
Follow the “Updating rubin-env-feedstock” instructions found there.
Other changes#
You could also change the static file contents, rename them, or modify Dockerfile.
Testing changes#
To test any of these changes, push them to sciplat-lab in a branch, and then trigger the build action from your branch rather than main.
Our recommendation is to use the latest weekly tag to test build machinery changes, but any container tag will do.
Pick a supplementary tag, restrict the push targets as appropriate, and change the tag on the image field; then run the action.
Test that image, which you will find down near the bottom of the dropdown list in the experimental builds section on the Hub spawner page. Note that it may take up to five minutes from the completion of the build for the new image to be available in the dropdown list.
Building from a different base container#
It is quite possible that rather than changing anything in the payload environment, you want to modify the UI environment.
That is encapsulated in jupyterlab-base, a part of Nublado.
Installing a package from GitHub#
The most common change to the base container is that you want to install something that is not available on PyPI: either it has not been released as a Python package yet, or it resides on a branch and you're testing it before merge.
In this case, you must edit the jupyterlab-base pyproject.toml and then update uv.lock.
Edit `pyproject.toml` and add a `[tool.uv.sources]` entry at the bottom. As an example, if you wanted to install `rsp-jupyter-extensions` from branch `tickets/DM-54736`, you'd add the following:

```toml
[tool.uv.sources]
rsp-jupyter-extensions = { git = "https://github.com/lsst-sqre/rsp-jupyter-extensions", branch = "tickets/DM-54736" }
```

Then run `uv lock --upgrade-package`, e.g. `uv lock --upgrade-package rsp-jupyter-extensions`. Commit the changes to your Nublado branch and push them.
Create a PR from the branch and mark it as draft; you’re never going to merge it, but the PR means that the corresponding containers will be built.
As mentioned above, the container tag will replace slashes with dashes, but respect capitalization; thus in the above example, the container tag you would use for your input image would be
tickets-DM-54736.
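The branch-to-tag mapping just described (slashes become dashes, capitalization is preserved) amounts to:

```shell
# Sketch of the branch-name-to-container-tag mapping; the function
# name is invented for illustration.
branch_to_tag() {
    printf '%s\n' "$1" | tr '/' '-'
}
branch_to_tag "tickets/DM-54736"   # → tickets-DM-54736
```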
After you’ve done that, you should go to the sciplat-lab actions page and start a build.
If you were working on a jupyterlab-base change, you can use main as the sciplat-lab action branch, as long as you remember to change the input container tag.
Other changes#
Other changes to the base container can be carried out as well. The layout is very similar to sciplat-lab, except that the various static files to be copied into the container are not (yet) consolidated.
Scripts#
If you are working on the shell scripts (for either jupyterlab-base or sciplat-lab), please use four-space indentations and convert tabs to spaces.