As a developer, I've absolutely loved getting to grips with Home Assistant, figuring out how to use it, and building my own basic Pyscripts, automations, and so much more. Still, there was one goal I set myself shortly after I started using it: building my own custom integration. It's an entirely different beast, but when I received the reTerminal E1002 and began to experiment with it, I had an idea for an integration that I thought would prove useful, both for the community and for my own needs. And that integration is ComfyUI.
For the uninitiated, ComfyUI is a node-based GUI designed to let users generate AI content in an incredibly versatile way. You can build an entire workflow and access it over the web, or run it locally on your own PC. ComfyUI also exposes an API, and with the recent Home Assistant 2025.10.0 release adding support for a new "Generate Image" AI Task, I was quite surprised to learn that there wasn't a ComfyUI integration already out there. In fact, very few integrations support the new task at all, and even the documentation is lacking.
So, I set about creating my own integration, allowing me to generate images from Home Assistant, using data from my home to build out the prompt, and then save those images so that they can be displayed on the reTerminal E1002. And if you want to use ComfyUI with Home Assistant, I've open-sourced my integration so that you can install it, too.
This is my first time building a Home Assistant integration, and I pieced it together by studying the Google Generative AI integration and the Azure AI integration. Those examples proved invaluable, but because this is a first attempt, my approach to the development and design of this integration may not be the right way of doing things.
Building our ComfyUI integration
There are very few image generators built for Home Assistant so far
As already mentioned, the most valuable help came from the Google Generative AI integration and the Azure AI integration, both of which I found on GitHub. On top of that, the Open Home Foundation has published unit tests for image-related AI tasks, and it was the combination of all of these that made it possible to build this integration. It currently does not support attachments (for image-to-image workflows), and can only generate an image from a text-based prompt.
In the above screenshot, you can see that I've exported the example workflow for Stable Diffusion 3.5 as a JSON file compatible with ComfyUI's API. This file is the most important part of the integration, and it needs to be uploaded to your Home Assistant server; I put mine in /config/comfyui. You'll also need to note the node numbers for the seed, the width and height, and the prompt. Because the workflow is static, the seed will always stay the same unless we change it, and we'll certainly need to customize the prompt when calling it from Home Assistant. When you set up the integration, it asks for the number of each of these nodes; in the above screenshot, for example, the prompt is node number 16.
The next step was working out how to actually use the API, but thankfully, that part is mostly simple: it's a POST request containing the workflow file, sent to the /prompt endpoint on the server hosting ComfyUI. All I needed was a way to get the image back as a response and actually save it, rather than letting the AI task hold onto it temporarily, as those images only last an hour. I built a simple script, separate from the integration, to save images to a publicly available endpoint. Home Assistant offers this functionality natively, serving anything placed in /config/www under the /local path. And with all of that said, you can always open up ComfyUI and access your previously generated images there.
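To make that concrete, here's a minimal sketch of the round trip, assuming ComfyUI is reachable at comfyui.local:8188 and that node 16 holds the prompt, as in my example workflow. The seed node ID ("3"), the filename, and the prompt text are all placeholders; yours will differ.

```python
import json
import random
import time

import requests  # a real integration would use Home Assistant's async client instead

COMFYUI_URL = "http://comfyui.local:8188"  # assumption: wherever ComfyUI is hosted

# Load the API-format workflow exported from ComfyUI and patch the dynamic nodes.
with open("/config/comfyui/sd35_workflow.json") as f:  # hypothetical filename
    workflow = json.load(f)

workflow["16"]["inputs"]["text"] = "a cozy living room at dusk"  # prompt node from the example
workflow["3"]["inputs"]["seed"] = random.getrandbits(32)  # "3" is a placeholder seed node ID

# Queue the workflow; ComfyUI responds with a prompt_id we can poll for.
resp = requests.post(f"{COMFYUI_URL}/prompt", json={"prompt": workflow})
resp.raise_for_status()
prompt_id = resp.json()["prompt_id"]

# The job only appears in /history once it has finished executing.
while prompt_id not in (history := requests.get(f"{COMFYUI_URL}/history/{prompt_id}").json()):
    time.sleep(1)

# Fetch the finished image(s) through the /view endpoint.
for node_output in history[prompt_id]["outputs"].values():
    for img in node_output.get("images", []):
        image_bytes = requests.get(
            f"{COMFYUI_URL}/view",
            params={"filename": img["filename"], "subfolder": img["subfolder"], "type": img["type"]},
        ).content  # ready to be written out, e.g. to /config/www/ai_images
```

The /prompt, /history, and /view endpoints are ComfyUI's standard API surface; the node IDs come from whatever your exported JSON contains, which is exactly why the integration asks for them during setup.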
As for why I built this in the first place: honestly, it's nice to be able to build, test, and run all of this on my own home network without any requirement for an external API. The beauty of it is that you can use sensors from your home to build the prompts, so you could base them on the number of people who are home, the devices that are on or off, or the weather conditions in your area. The prompts are completely dynamic, and paired with the reTerminal E1002 (or any similar display), the result can look great and be a unique stand-out in your home.
Setting up our ComfyUI automations
Bringing it all together
Adding the ComfyUI integration gets generation working, but automating it still requires additional steps. To use our ComfyUI integration with the reTerminal E1002, we need three things:
- A script to download our images from the temporary Home Assistant path
- An automation that triggers when the images are downloaded to that path
- A script to quantize those images to a six-color palette, saving them as 24-bit BMP images using the Floyd-Steinberg dithering method

I believe this quantization step is the best way to show images on this display, as Waveshare recommends a similar method (using Photoshop) for its 7.3-inch Spectra 6 display; Pyscript just allows us to automate the process.
The first script uses the Downloader integration, the automation uses the Folder Watcher integration, and the quantizer script is built using Pyscript. My script to download images will be available on GitHub, as will the automation and the image quantizer. The quantized image is required because the reTerminal E1002 uses a Spectra 6 display, which can only render six colors.
For downloading our images, our script does the following (a rough sketch follows the list):
- Calls the "generate image" task in Home Assistant, using our ComfyUI task and a prompt
- Waits for the response, then saves the temporary URL to a variable
- The Downloader integration downloads the image from the saved variable appended to our local Home Assistant URL, and saves it to /config/www/ai_images
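The actual implementation is a regular Home Assistant script, but since Pyscript has come up already, here's a rough Pyscript equivalent of those three steps. Treat the names here as assumptions: ai_task.generate_image is the action added in 2025.10, but I haven't verified the exact response shape, ai_task.comfyui is a placeholder entity ID, and service responses require a recent Pyscript.

```python
@service
def generate_and_download(prompt=None):
    """Generate an image via the ComfyUI AI Task entity, then download it locally."""
    result = service.call(
        "ai_task",
        "generate_image",
        task_name="daily image",
        instructions=prompt,
        entity_id="ai_task.comfyui",  # assumption: entity created by the integration
        return_response=True,         # assumes a Pyscript version with service-response support
    )
    image_url = result["image_url"]   # assumption: the temporary URL field in the response

    # The Downloader integration saves into its configured directory; with that
    # set to /config/www, subdir "ai_images" gives us the publicly served folder.
    downloader.download_file(
        url=f"http://homeassistant.local:8123{image_url}",
        subdir="ai_images",
        filename="today_1.png",  # in practice the number is incremented per run
        overwrite=True,
    )
```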
Now that this is ready, the last part is our Pyscript to quantize the images for the display.
We use an automation triggered by the Folder Watcher integration, which fires when a file is saved to the folder. The event from Folder Watcher can be used to kick off our Pyscript, passing the absolute path of the image to the script for processing. The Pyscript is fairly simple, and it's only necessary for Spectra 6 displays; if you intend to use this with any other kind of display, you can skip it and just use the PNG files saved to /config/www/ai_images.
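Here's a minimal sketch of that quantizer, assuming a recent Pillow. The palette is the nominal six Spectra 6 colors (black, white, red, yellow, blue, green), and the service name is a placeholder:

```python
from PIL import Image

# Nominal palette for an E Ink Spectra 6 panel, as a flat RGB list.
SPECTRA6_PALETTE = [
    0, 0, 0,        # black
    255, 255, 255,  # white
    255, 0, 0,      # red
    255, 255, 0,    # yellow
    0, 0, 255,      # blue
    0, 255, 0,      # green
]

@service
def quantize_for_spectra6(path=None):
    """Quantize a PNG to the six-color palette with Floyd-Steinberg dithering."""
    pal = Image.new("P", (1, 1))
    pal.putpalette(SPECTRA6_PALETTE)  # remaining palette entries are padded with black
    img = Image.open(path).convert("RGB")
    out = img.quantize(palette=pal, dither=Image.Dither.FLOYDSTEINBERG)
    # Convert back to RGB so the file is written as a 24-bit BMP, which is
    # what the display pipeline expects.
    out.convert("RGB").save(path.replace(".png", ".bmp"), "BMP")
```

In real Pyscript you'd likely wrap the blocking Pillow work in task.executor, and the Folder Watcher automation would pass the event's file path into the service.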
Now we're on to the last, and thankfully, easiest hurdle: the reTerminal E1002.
Pulling our images to the reTerminal E1002 using ESPHome
The easy part
Throughout this, I've been saving the images as "today_(number).png", which lets us increment and loop the number from one through five to match how many images there should be in a day. The reTerminal E1002 has 8MB of PSRAM, which is more than enough for small 800x480 images, so we simply use ESPHome's "online_image" component to point at our image, updating the URL with the incremented index that tracks which image we're showing. Once the image is downloaded and shown, we contact an MQTT server to save our index (as it would otherwise be reset), then invoke ESPHome's deep sleep component, which dramatically extends battery life.
As for where we get the images from, persistence comes from the fact that our images are stored in /config/www/ai_images in Home Assistant, so we can access them at "http://homeassistant.local:8123/local/ai_images/today_(number).bmp". These are our converted, ready-to-use images for the Spectra 6 display, and we simply put the device to sleep until we need to wake it up again to change the image. With this, we can expect almost a month of battery life, as the device only wakes once an hour to update the display before going back to sleep. That can be extended to nearly three months by stretching the interval to one update every six hours.
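For reference, here's roughly what the relevant ESPHome pieces look like. This is a sketch rather than the exact config from my repo: the IDs are illustrative, epaper_display stands in for the panel's display component, and the logic that swaps the index into the URL is omitted.

```yaml
globals:
  - id: image_index
    type: int
    initial_value: "1"

online_image:
  - id: ai_image
    url: "http://homeassistant.local:8123/local/ai_images/today_1.bmp"
    format: BMP
    type: RGB
    update_interval: never  # fetched manually each wake cycle
    on_download_finished:
      - component.update: epaper_display  # placeholder display ID
      - mqtt.publish:
          topic: "reterminal/image_index"  # persists the index across deep sleep
          payload: !lambda 'return to_string(id(image_index));'
      - deep_sleep.enter: sleeper

deep_sleep:
  id: sleeper
  sleep_duration: 1h  # stretch to 6h for roughly three months on battery
```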
And that's it! Through a custom ComfyUI integration, an ESPHome configuration to pull images from Home Assistant, and a Pyscript to convert those images, all tied together with automations, we can now generate dynamic images based on data taken from Home Assistant and display them on our dashboard. This has arguably been the biggest and most ambitious project I've undertaken in a long time, and I'm amazed at just how well it works. Below are the two GitHub repositories that contain everything you need to get started:
- ComfyUI custom component (you can install it with HACS)
- Automation, scripts, and ESPHome
You just need to set up ComfyUI, note the node numbers for the seed, prompt, and resolution, upload your workflow JSON to your Home Assistant server, and reference its path during setup. At the bare minimum, you'll then have AI image generation working locally in a way that Home Assistant understands, and you can go from there to build your own automations and scripts that utilize it.