Glossary:
Pull means downloading a container image directly from a remote registry.
Push means uploading a container image directly to a remote registry.
Load takes an image that is available as an archive, and makes it available in the cluster.
Save saves an image into an archive.
Build takes a “build context” (directory) and creates a new image in the cluster from it.
Tag means assigning a name and tag to an image.
Contents
- Comparison table for different methods
- 1. Pushing directly to the in-cluster Docker daemon (docker-env)
- 2. Push images using ‘cache’ command
- 3. Pushing directly to in-cluster CRI-O (podman-env)
- 4. Pushing to an in-cluster using Registry addon
- 5. Building images inside of minikube using SSH
- 6. Pushing directly to in-cluster containerd (buildkitd)
- 7. Loading directly to in-cluster container runtime
- 8. Building images to in-cluster container runtime
Comparison table for different methods
The best method to push your image to minikube depends on the container-runtime you built your cluster with (the default is docker). Here is a comparison table to help you choose:
| Method | Supported Runtimes | Performance | Load | Build |
|---|---|---|---|---|
| docker-env command | only docker | good | yes | yes |
| cache command | all | ok | yes | no |
| podman-env command | only cri-o | good | yes | yes |
| registry addon | all | ok | yes | no |
| minikube ssh | all | best | yes* | yes* |
| ctr/buildctl command | only containerd | good | yes | yes |
| image load command | all | ok | yes | no |
| image build command | all | ok | no | yes |
- note1: the default container-runtime on minikube is ‘docker’.
- note2: the ‘none’ driver (bare metal) does not need images to be pushed to the cluster, as any image on your system is already available to Kubernetes.
- note3: when using ssh to run the commands, the files to load or build must already be available on the node (not only on the client host).
1. Pushing directly to the in-cluster Docker daemon (docker-env)
This is similar to podman-env, but only for the Docker runtime. When using a container or VM driver (all drivers except none), you can reuse the Docker daemon inside the minikube cluster. This means you don’t have to build on your host machine and push the image into a Docker registry; you can just build inside the same Docker daemon as minikube, which speeds up local experiments.
To point your terminal to use the docker daemon inside minikube run this:
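A minimal example for a POSIX shell (on Windows, minikube docker-env prints the equivalent PowerShell or cmd commands):

```shell
# Export the DOCKER_* variables that point the docker CLI at minikube's daemon
eval $(minikube docker-env)
```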
Now any ‘docker’ command you run in this terminal will run against the Docker daemon inside the minikube cluster.
For example, the following command will show you the containers running inside minikube’s VM or container:
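For example:

```shell
# Lists containers from the daemon inside minikube, not from your host
docker ps
```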
Now you can ‘build’ against the Docker daemon inside minikube, and the resulting image is instantly accessible to the Kubernetes cluster:
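A sketch of a build (the image name my_image is just a placeholder):

```shell
# Build from the Dockerfile in the current directory, directly into minikube's daemon
docker build -t my_image .
```

Because the image lands directly in minikube’s Docker storage, no push is needed; reference my_image in your pod spec with imagePullPolicy: Never or IfNotPresent.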
To verify your terminal is using minikube’s docker-env, check that the environment variable MINIKUBE_ACTIVE_DOCKERD is set to the cluster name:
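For example:

```shell
# Prints the active cluster name (e.g. "minikube") when docker-env is in effect
echo $MINIKUBE_ACTIVE_DOCKERD
```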
More information on docker-env
2. Push images using ‘cache’ command.
From your host, you can push a Docker image directly to minikube. This image will be cached and automatically pulled into all future minikube clusters created on the machine.
The add command will store the requested image to $MINIKUBE_HOME/cache/images, and load it into the minikube cluster’s container runtime environment automatically.
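For example (alpine:latest is just a sample image name):

```shell
# Caches the image on the host and loads it into the cluster's runtime
minikube cache add alpine:latest
```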
minikube refreshes the cached images on each start. However, to reload all the cached images on demand, run this command:
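```shell
# Re-pushes all cached images into the running cluster
minikube cache reload
```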
To display images you have added to the cache:
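```shell
# Shows the images currently in $MINIKUBE_HOME/cache/images
minikube cache list
```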
This listing will not include minikube’s built-in system images.
For more information, see:
- Reference: cache command
3. Pushing directly to in-cluster CRI-O (podman-env)
Remember to turn off imagePullPolicy: Always (use imagePullPolicy: IfNotPresent or imagePullPolicy: Never instead), as otherwise Kubernetes won’t use the images you built locally.
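As with docker-env, a sketch of the typical flow (assuming a POSIX shell; the --remote flag makes podman use the connection that podman-env configures, and my_image is a placeholder):

```shell
# Point the podman client at minikube's CRI-O/podman service
eval $(minikube podman-env)
# Build directly into the cluster's image storage
podman --remote build -t my_image .
```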
4. Pushing to an in-cluster registry using the Registry addon
For illustration purposes, we will assume that the minikube VM has an IP from the 192.168.39.0/24 subnet. If you have not overridden these subnets as per the networking guide, you can find the default subnet used by minikube for a specific OS and driver combination here, though it is subject to change. Replace 192.168.39.0/24 with the appropriate values for your environment wherever applicable.
Ensure that Docker is configured to use 192.168.39.0/24 as an insecure registry. Refer here for instructions.
Ensure that 192.168.39.0/24 is enabled as an insecure registry in minikube. Refer here for instructions.
Enable minikube registry addon:
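```shell
# Starts a registry inside the cluster
minikube addons enable registry
```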
Build docker image and tag it appropriately:
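A sketch, assuming the addon’s registry listens on port 5000 of the minikube IP and test-img is a hypothetical image name:

```shell
# Tag the image with the in-cluster registry's address so docker knows where to push it
docker build --tag $(minikube ip):5000/test-img .
```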
Push docker image to minikube registry:
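Continuing the same assumed tag:

```shell
docker push $(minikube ip):5000/test-img
```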
5. Building images inside of minikube using SSH
Use minikube ssh to run commands inside the minikube node, and run the build command directly there. Any command you run there will run against the same daemon and storage that the Kubernetes cluster is using.
For Docker, use:
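For example, inside the minikube ssh session (my_image is a placeholder):

```shell
docker build -t my_image .
```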
For more information on the docker build command, read the Docker documentation (docker.com).
For CRI-O, use:
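For example (podman typically needs root inside the node; my_image is a placeholder):

```shell
sudo podman build -t my_image .
```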
For more information on the podman build command, read the Podman documentation (podman.io).
For Containerd, use:
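A sketch of both options inside the node (the archive and image names are placeholders; buildctl requires a running buildkitd, see section 6):

```shell
# Import an existing image archive into the k8s.io namespace
sudo ctr -n=k8s.io images import my_image.tar

# Or build from a Dockerfile with BuildKit
sudo buildctl build \
    --frontend=dockerfile.v0 \
    --local context=. \
    --local dockerfile=. \
    --output type=image,name=my_image
```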
For more information on the ctr images command, read the containerd documentation (containerd.io).
For more information on the buildctl build command, read the Buildkit documentation (mobyproject.org).
To exit minikube ssh and come back to your terminal, type exit.
6. Pushing directly to in-cluster containerd (buildkitd)
This is similar to docker-env and podman-env but only for Containerd runtime.
Currently it requires starting the daemon and setting up the tunnels manually.
ctr instructions
In order to access containerd, you need to log in as root. This requires adding your ssh key to /root/.ssh/authorized_keys.
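One possible sequence inside minikube ssh (assumes the standard ~/.ssh/authorized_keys layout):

```shell
# Copy the node user's authorized keys over to root
sudo mkdir -p /root/.ssh
sudo chmod 700 /root/.ssh
sudo cp ~/.ssh/authorized_keys /root/.ssh/
```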
Note the flags that are needed for the ssh command.
Tunnel the containerd socket from the machine to the host (use the ssh flags noted above, most notably the -p port and root@host):
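A sketch of the tunnel (replace &lt;port&gt; and the host with the values from your minikube ssh flags):

```shell
# Forward the remote containerd unix socket to a local socket file
ssh -nNT -L ./containerd.sock:/run/containerd/containerd.sock -p <port> root@127.0.0.1 &
```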
Now you can run commands against this unix socket, tunneled over ssh:
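For example (ctr reads the CONTAINERD_ADDRESS environment variable as its socket address):

```shell
export CONTAINERD_ADDRESS=./containerd.sock
# List images in the namespace Kubernetes uses
ctr --namespace=k8s.io images ls
```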
Images in the “k8s.io” namespace are accessible to the Kubernetes cluster.
buildctl instructions
Start the BuildKit daemon, using the containerd backend.
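A sketch, run inside the node (these buildkitd flags select the containerd worker and the namespace Kubernetes uses; sudo -b backgrounds the daemon):

```shell
sudo -b buildkitd \
    --oci-worker=false \
    --containerd-worker=true \
    --containerd-worker-namespace=k8s.io
```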
Make the BuildKit socket accessible to the regular user.
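One possible approach (assumes the default socket path /run/buildkit/buildkitd.sock; adjust the group to one your user belongs to):

```shell
# Grant group access to the BuildKit socket
sudo chgrp docker /run/buildkit/buildkitd.sock
sudo chmod g+rw /run/buildkit/buildkitd.sock
```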
Note the flags that are needed for the ssh command.
Tunnel the BuildKit socket from the machine to the host (use the ssh flags noted above, most notably the -p port and user@host):
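A sketch of the tunnel (replace &lt;port&gt; and the user/host with the values from your minikube ssh flags):

```shell
# Forward the remote BuildKit unix socket to a local socket file
ssh -nNT -L ./buildkitd.sock:/run/buildkit/buildkitd.sock -p <port> docker@127.0.0.1 &
```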
After that, it should now be possible to use buildctl:
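For example (BUILDKIT_HOST points buildctl at the tunneled socket; the image name is a placeholder):

```shell
export BUILDKIT_HOST=unix://$PWD/buildkitd.sock
buildctl build \
    --frontend=dockerfile.v0 \
    --local context=. \
    --local dockerfile=. \
    --output type=image,name=my_image
```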
Now you can ‘build’ against the storage inside minikube, and the resulting image is instantly accessible to the Kubernetes cluster.
7. Loading directly to in-cluster container runtime
The minikube client will talk directly to the container runtime in the cluster, and run the load commands there – against the same storage.
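For example (my_image is a placeholder; an archive such as my_image.tar works as well):

```shell
# Loads the image from the host's local daemon or archive into the cluster
minikube image load my_image
```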
For more information, see:
- Reference: image load command
8. Building images to in-cluster container runtime
The minikube client will talk directly to the container runtime in the cluster, and run the build commands there – against the same storage.
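For example (my_image is a placeholder tag; the build context is the current directory):

```shell
minikube image build -t my_image .
```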
For more information, see:
- Reference: image build command