Over the last few months I have been playing around with a Home Assistant instance in Docker to run all of my home automation, and over time I have slowly added more and more to it (more integrations, more services, more devices, etc.).
Some info about my setup before we get into it.
- Host machine running Windows 10 Pro
- Docker Desktop using WSL2 backend
- .wslconfig configured with 30GB RAM and 5 processors
Since I don’t run Home Assistant Operating System, I don’t have the Supervisor available to install services through, which means I’ve had to set up my own Docker containers for additional services. The containers I have running here are:
- Home Assistant - The standalone Home Assistant Core install
- PostgreSQL - Database storage for Home Assistant recorder (instead of using filesystem)
- Node Red - Low-code programming tool for wiring up events
- Mosquitto - An MQTT message broker (to allow devices to use a publish/subscribe model of messaging)
- Tasmoadmin - Administration platform for Tasmota devices (to allow OTA updates of devices without direct internet access)
- Chrony - A Network Time Protocol (NTP) Server (to allow devices without internet access to synchronise time)
- InfluxDB - A Scalable datastore for metrics, events, and real-time analytics (used for long term storage of sensor data)
- Grafana - Open source data visualisation platform (to create dashboards from the data in InfluxDB)
So let’s take a look at the docker-compose.yml file:
Let’s start with a few of the common parts to it.
- `container_name` - I’ve given all of the containers specific names instead of the default naming of `<folder name>_<service name>_1`.
- `networks` - All the containers have been added to a custom network called `localnet`, which is also defined at the top level with the driver set to `bridge`. This makes sure all of the containers can properly talk to each other and works around some network oddities I was experiencing with my setup.
- `restart` - All of the containers are set to always restart, so whenever Docker starts up they will start up too. If the host is ever restarted, the containers come back automatically.
- `depends_on` - A few containers set `depends_on` to ensure other containers are successfully loaded before them, but this isn’t strictly necessary.
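Putting those common parts together, a skeleton of the file looks something like this (the image tags and second service are illustrative, not my exact file):

```yaml
# Sketch of the common docker-compose structure described above.
version: "3"

networks:
  localnet:
    driver: bridge            # custom bridge network so containers can reach each other

services:
  homeassistant:
    container_name: home-assistant   # explicit name instead of <folder>_<service>_1
    image: homeassistant/home-assistant:stable   # tag is illustrative
    networks:
      - localnet
    restart: always                  # come back automatically after a host reboot

  nodered:
    container_name: node-red
    image: nodered/node-red
    networks:
      - localnet
    restart: always
    depends_on:
      - homeassistant                # affects start ordering only; optional
```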
Now into the containers themselves…
The main Home Assistant Core image. Here I have pinned a specific version so that I can easily see which version I’m using and update when I need to. Using `latest` would work as well, but you may need to make sure it pulls down the latest image instead of using the existing cached version.
I have also set up some mapped volumes for this so I can keep my config files in a folder on the Windows host machine.
The entry `/c/local-docker/home-assistant/config:/config` maps the Windows folder `C:\local-docker\home-assistant\config` to the `/config` directory within the container, which is where the main config YAML files are kept. This lets me edit the config files from the host machine more easily, as well as persisting the config across rebuilds of the container. The other volume entry is for media, which allows me to easily add media to the server for local playback on media players. The `TZ` environment variable sets the timezone of the container to make sure it’s running in my local timezone.
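As a sketch, the service ends up looking roughly like this (the media host path and timezone value are assumptions; substitute your own):

```yaml
homeassistant:
  container_name: home-assistant
  image: homeassistant/home-assistant:stable   # tag is illustrative; pin your own
  volumes:
    - /c/local-docker/home-assistant/config:/config  # C:\local-docker\home-assistant\config
    - /c/local-docker/home-assistant/media:/media    # media host path is an assumption
  environment:
    - TZ=Australia/Sydney     # timezone value is an assumption; set your local zone
  networks:
    - localnet
  restart: always
```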
I’ve gone with a PostgreSQL instance for the recorder because my Home Assistant container has its config folder as a mounted volume, which takes a bit of a performance hit on Windows, and since these folders live on my local C drive I wanted to make sure Home Assistant wasn’t slowing down anything else running on the host machine too much. The recorder integration also supports MySQL, MariaDB and SQLite (the default); more info here: https://www.home-assistant.io/integrations/recorder/. The container I actually use is in a separate docker-compose file because I use it for other things as well, but I’ve included it here for completeness.
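Pointing the recorder at PostgreSQL is a one-line change in configuration.yaml; the `db_url` format below follows the recorder docs, with placeholder credentials and hostname (the hostname is the PostgreSQL container's name on the shared network):

```yaml
# configuration.yaml — recorder using PostgreSQL instead of the default SQLite
# (user, password, host and database name are placeholders)
recorder:
  db_url: postgresql://hass:password@postgres/homeassistant
```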
I am not pinning a version for this image, so it will just use the latest available when it is started (I haven’t had any issues with this so far). To connect the Node-RED instance to Home Assistant you need to make sure you install the relevant Home Assistant packages:
Running Node-RED in Docker like this, I found that installing packages from the UI didn’t install the latest from npm, because it installed them from the Node-RED flows library (https://flows.nodered.org/). To get the latest versions I had to run the `npm install` commands in the `/data` directory on the container itself.
This container has a volume set up for the `/data` directory so that I can back up the config and the flow JSON files, and state is again persisted between rebuilds of the container.
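Installing inside the container can be done from the host with `docker exec`; a sketch of the commands, assuming the container is named `node-red` and using `node-red-contrib-home-assistant-websocket` as the example package:

```shell
# Container name "node-red" and the package name are assumptions for illustration.
# Installing in /data pulls the latest published version straight from npm,
# rather than the version listed in the Node-RED flows library.
docker exec -it node-red bash -c "cd /data && npm install node-red-contrib-home-assistant-websocket"

# Restart the container so Node-RED picks up the newly installed package
docker restart node-red
```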
Using the eclipse-mosquitto broker for MQTT (the latest version works fine here) because it seems to be the best supported one for Home Assistant. For this one I have three different volumes to map the config, data and log directories to local folders, mainly so I can back up the config and ensure state is persistent between container rebuilds.
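The three mappings use the directories the eclipse-mosquitto image expects under `/mosquitto`; the host paths below are assumptions following the pattern from earlier:

```yaml
mosquitto:
  container_name: mosquitto
  image: eclipse-mosquitto
  volumes:
    - /c/local-docker/mosquitto/config:/mosquitto/config  # mosquitto.conf lives here
    - /c/local-docker/mosquitto/data:/mosquitto/data      # retained messages, persistence
    - /c/local-docker/mosquitto/log:/mosquitto/log
  ports:
    - "1883:1883"          # standard MQTT port, exposed for devices on the LAN
  networks:
    - localnet
  restart: always
```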
The image for this is no longer uploaded to Docker Hub, which is why it is pulled from GitHub Packages (`ghcr.io/tasmoadmin/tasmoadmin:v1.8.0`). The mapped volume here lets me keep a backup of my config and devices, as well as helping to persist state between container rebuilds.
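A minimal sketch of the service, pulling from GitHub Packages (the host path is an assumption):

```yaml
tasmoadmin:
  container_name: tasmoadmin
  image: ghcr.io/tasmoadmin/tasmoadmin:v1.8.0  # Docker Hub no longer updated
  volumes:
    - /c/local-docker/tasmoadmin/data:/data    # host path is an assumption
  networks:
    - localnet
  restart: always
```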
Not too much to configure for this one; there is an environment variable (`NTP_SERVERS`) to set the time server to use, which can be set to whatever you want. I went with time.windows.com because my host machine is running Windows and I wanted the two running against the same time server.
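A sketch of the service; the image name here is an assumption (a commonly used chrony image that takes `NTP_SERVERS`), since the post doesn't name one:

```yaml
chrony:
  container_name: chrony
  image: cturra/ntp          # image name is an assumption; any chrony image with NTP_SERVERS works
  environment:
    - NTP_SERVERS=time.windows.com   # match the Windows host's time source
  ports:
    - "123:123/udp"          # NTP port for devices on the LAN without internet access
  networks:
    - localnet
  restart: always
```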
It took me a little while to decipher the documentation to get the right config for this one, since I hadn’t used InfluxDB before, but I got all the environment variables set up for it to work. Most of these are pretty straightforward (username, password, org and bucket); `DOCKER_INFLUXDB_INIT_MODE=setup` ensures the automated setup is run the first time the container starts. I also have a volume mapped here so that data persists between container rebuilds, mainly so that I don’t have to reconfigure the integration with Home Assistant, which requires the organisation ID (not the same as the org environment variable) and the API token (generated through the automated setup).
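The `DOCKER_INFLUXDB_INIT_*` variables come from the official InfluxDB 2.x image; credentials, org and bucket names below are placeholders:

```yaml
influxdb:
  container_name: influxdb
  image: influxdb            # 2.x image; automated-setup variables are from its docs
  environment:
    - DOCKER_INFLUXDB_INIT_MODE=setup        # run first-time setup automatically
    - DOCKER_INFLUXDB_INIT_USERNAME=admin    # placeholder credentials
    - DOCKER_INFLUXDB_INIT_PASSWORD=changeme
    - DOCKER_INFLUXDB_INIT_ORG=home          # placeholder org name
    - DOCKER_INFLUXDB_INIT_BUCKET=homeassistant
  volumes:
    - /c/local-docker/influxdb/data:/var/lib/influxdb2  # host path is an assumption
  networks:
    - localnet
  restart: always
```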
I haven’t used this one much yet, so this config may change as I use it more. At the moment I just have a mapped volume for backups and persistent state.
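Which keeps the service about as minimal as it gets (host path is an assumption):

```yaml
grafana:
  container_name: grafana
  image: grafana/grafana
  volumes:
    - /c/local-docker/grafana/data:/var/lib/grafana  # dashboards, plugins, settings
  ports:
    - "3000:3000"            # Grafana web UI
  networks:
    - localnet
  restart: always
```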
Since these services are all installed manually via Docker, I miss some of the integration points that come with installing them through the HA Supervisor. One of those is adding them to the side menu in Home Assistant. To do this we can make use of the `panel_iframe` integration; here is an example of that integration in configuration.yaml.
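A sketch of that `panel_iframe` config for a couple of the services above; the URLs and host address are placeholders (use the host's address as seen from your browser):

```yaml
# configuration.yaml — add manually-installed services to the HA sidebar
panel_iframe:
  nodered:
    title: Node-RED
    icon: mdi:sitemap
    url: http://192.168.1.10:1880   # placeholder host address and port
  grafana:
    title: Grafana
    icon: mdi:chart-line
    url: http://192.168.1.10:3000
```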