Press "Enter" to skip to content

Category: computers

Adobe Illustrator Preventing macOS Sleep?

Ever since getting my new monitor (a Dell U3225QE — a nice IPS LCD after some OLED issues) I’ve been having problems with it not going to sleep. But that’s not usually a monitor problem, especially as I could manually put it to sleep… So what’s keeping macOS from putting it to sleep?

Well, thankfully with pmset one can see what’s going on:

c0nsumer@mini ~ % pmset -g
System-wide power settings:
Currently in use:
standby 0
Sleep On Power Button 1
autorestart 0
powernap 1
networkoversleep 0
disksleep 10
sleep 0 (sleep prevented by backupd-helper, powerd, backupd, coreaudiod, coreaudiod)
ttyskeepawake 1
displaysleep 10 (display sleep prevented by CEPHtmlEngine)
tcpkeepalive 1
powermode 0
womp 1
c0nsumer@mini ~ %

There we go, seems CEPHtmlEngine is preventing the display from sleeping. So what is it?

c0nsumer@mini ~ % pmset -g assertions | grep CEPHtmlEngine
pid 15995(CEPHtmlEngine): [0x00038aae00059926] 46:44:27 NoDisplaySleepAssertion named: "Video Wake Lock"
c0nsumer@mini ~ % ps aux | grep 15995
c0nsumer 15995 3.4 0.1 412316000 64256 ?? R Sat08AM 128:30.24 /Applications/Adobe Illustrator 2025/Adobe Illustrator.app/Contents/MacOS/CEPHtmlEngine/CEPHtmlEngine.app/Contents/MacOS/CEPHtmlEngine b27716d6-c14c-49e4-8612-b5ab9de9bdf4 1103d4a0-8756-40b2-af81-5646ba80756f ILST 29.8.4 com.adobe.illustrator.OnBoarding 1 /Applications/Adobe Illustrator 2025/Adobe Illustrator.app/Contents/Required/CEP/extensions/com.adobe.illustrator.OnBoarding 32 e30= en_US 1 -11316397 0
c0nsumer 58444 0.0 0.0 410724448 1472 s002 S+ 7:14AM 0:00.00 grep 15995
c0nsumer@mini ~ %

Really? Illustrator? Huh… I have been working on a new map of Bloomer Park (in anticipation of the forthcoming Clinton River Oaks Park changes) for CRAMBA and leaving it open in the background… I guess that’s it.

And strangely, closing and re-launching Illustrator made the assertion go away. And now the problem is gone.
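
To double-check that the assertion really cleared after the relaunch, a variation of the earlier assertions check works; if this returns nothing, no process is holding the display awake:

pmset -g assertions | grep -i NoDisplaySleep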

Oh, Adobe…

At least it’s easy to tell why it was happening.

(This is Adobe Illustrator v29.8.4 on macOS Sequoia 15.7.3.)


OLED… Not for me.

When I switched from an iMac to a Mac mini in late 2024 I chose an ASUS ProArt 5K PA27JCV (27″, 60 Hz) for the monitor, and while it looked great, it died after 14 months, seemingly with a backlight or power supply problem. ASUS’ warranty support requires shipping the monitor back, potentially waiting 3-4 weeks, and then getting a replacement. And worse, the replacement could have dead pixels, as the ASUS warranty doesn’t consider ≤5 dark pixels a problem.

The old HP ZR2440w that I swapped in as a spare wasn’t cutting it, so with an indeterminate wait ahead of me, the possibility of receiving a replacement with bad pixels, and my being vaguely interested in something larger with a faster refresh rate, I went looking at new monitors.

Coming to the realization that 4K is probably fine, I picked up a Dell 32 Plus 4K QD-OLED Monitor – S3225QC from Costco for $499. It was well reviewed online and looked pretty good when I played with one for about 20 minutes at Micro Center. When I got home and sat in front of it doing my normal things it looked a bit… different… almost as if my glasses weren’t working quite right. But I figured I just needed some time to get accustomed to new monitor tech. After all, it had a very high contrast ratio and sharp pixels; maybe it was just that?

After a few days it still didn’t feel right, so I began looking for a solution. Costco has a 90-day return window for computer monitors, so I had some time, but this didn’t look good; I wanted an answer soon.

I was fortunate to be able to borrow a Dell UltraSharp 32 4K USB-C Hub Monitor U3223QE for the weekend, which was perfect: being a high-end display with the same resolution and panel size as the S3225QC, it let me compare the two side by side. And in the end the LCD just looked better.

I took some macro photos of both displays and it turns out that what was bothering me was fringing, a problem common to OLEDs. It was hard to point out during normal use other than as text-is-a-bit-blurry-and-weird, or like an oversharpened image, or almost like artifacted text in a JPEG, but with photos it was much easier to see what was going on. Better yet, the photos showed the cause: the arrangement of the subpixels, the little red/blue/green dots that make up a pixel.

As shown above, the subpixels in the Dell S3225QC QD-OLED form a square with green on the top, a larger red subpixel in the lower left, and a smaller blue one in the lower right. The Dell U3223QE, a typical LCD, has three vertical stripes making a square. The result is that high contrast edges look very different on an OLED, often with a strong off-color border — or fringe — along horizontal and vertical lines.

In the photos above, note the vertical part of the 1, which has red and green dots along its right side, and the large red dots along the top of the 6 with green along the bottom. These are the strongly colored fringes. (On the LCD they appear white as the three equal-size subpixels contribute equally.)

This meant that the things I tend to do, text and fine lines in maps and CAD-type drawings, just aren’t right on the subpixel pattern found in this OLED panel. Beyond the subpixel pattern, I also suspect that the much crisper pixels (defined points of light) contribute to the fringing having an artifacting-like effect.

This was much more pronounced when looking at light text on a dark background; the way that I read most websites. Visual Studio Code does a wonderful job demonstrating this problem:

This gets at why OLEDs make great TVs and gaming monitors. The contrast is outstanding, color is excellent, and high refresh rates are ideal for moving images and fast-response games. And there’s no noticeable fringing because edges are constantly moving across pixels; almost nothing is still. They also work great on small devices like phones where the pixel density is so high that fringing is too small to see.

But on desktop monitors for still things — text and fine lines — OLEDs currently just aren’t great; I guess that’s why office and productivity type monitors are still LCDs. Even though I don’t like being that person who returns computer stuff just because they don’t like it, I ended up returning the monitor after only four days of using it. The S3225QC and its QD-OLED panel just don’t work for me; it made my eyes feel funny to use.

Within the past few weeks LG has announced RGB-stripe OLED panels which should resolve this problem, but there aren’t currently any monitors available using these panels, so back to an LCD I’ll go. (It looks like ASUS and MSI will have some available soon, but only as wide-screen gaming monitors. And I suspect the first ones available will be fairly expensive.)

Whether this’ll be buying my own U3223QE, perhaps a Dell U3225QE (which adds 120 Hz refresh, an ambient light sensor, and a Thunderbolt dock), or just waiting for an ASUS PA27JCV to come back, I’m not sure… But whatever I end up using will, for now, be an LCD, not an OLED.


Home Assistant as Personal Device Tracker

Last two years of my phone’s location, as gathered by Home Assistant.

Part of our Home Assistant (HA) setup uses the Companion Mobile App for easy remote control and to collect data from our devices. The main tracked item is the phone’s location, so HA can tell if we’re home or not, and currently I only use it to change how some lighting automations work.

I also have HA set up to log all device state data (switches, outlets, climate sensors, power consumption) to a local instance of InfluxDB, and then have Grafana installed so I can visualize this data.
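
For completeness, the logging side is Home Assistant’s influxdb integration in configuration.yaml. Here’s a minimal sketch rather than my exact settings: the host is a placeholder, the keys are the InfluxDB 1.x style matching the InfluxQL query below, and default_measurement is an assumption about why the device_tracker data lands in the state measurement:

influxdb:
  host: 192.168.0.10          # placeholder; wherever InfluxDB runs
  port: 8086
  database: homeassistant
  default_measurement: state  # assumption: unit-less entities get logged under the "state" measurement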

Mains Voltage via Home Assistant from February 2024 through December 2025.

My original use for this was long-term logging of temperature and humidity sensor data — which is neat to see — but I’ve since experimented with graphing things like mains voltage. This was neat because it made it easy to see how voltage drops and becomes erratic during summertime cooling periods. It also showed that grid voltage jumped up by ~2 VAC in March 2025, around which time I recall DTE doing utility work on the grid just north of our house. (Yes, evidence of them improving things locally.)

Late on Christmas evening, wanting some time to just sit alone and do things, I put together a map showing where my phone had been. I’m pretty happy with how it came out: I can now input a time range and dots will appear for each logged location, color-coded by geopositioning accuracy (brighter green is more accurate).

I also used this as another exercise in working with LLM tools like ChatGPT. I’m (finally?) realizing how useful these can be when thought of as a modern search engine. There are still constant reminders of how imperfect and problematic results can be, but with a domain background they’re helpful. I find that thinking of these tools as a tireless (yet emotionless) junior employee who makes lots of mistakes and needs all responses tested and vetted works… decently… at pointing me in a useful direction.

But I digress… Here’s the query that’s the main point of this and makes it all go:

SELECT "latitude", "longitude","gps_accuracy"
FROM "homeassistant"."autogen"."state"
WHERE "entity_id" = 'pixel_8'
AND $timeFilter
AND "gps_accuracy" < 100
Last 30 days of phone location data.

It was then simply a matter of putting this into a Geomap panel that displays a point for each location, colored based on the gps_accuracy value, and making it look decent. I was even able to place it all on the Thunderforest Landscape map tiles, which show OSM-mapped trails and have been oh-so-useful on my RAMBA Trails Map.

Initially I looked at a heat map, but it didn’t seem as useful as individual points. I may explore this later, but the device where I’m currently running HA is a bit under-powered for this. And note that the query above excludes records that have a GNSS accuracy worse than 100 meters as this generally means that GPS (et al) wasn’t working at all and geopositioning likely came from local mobile towers (which shows me as being on tall local buildings, in fields I’d never visit, etc).

While obvious in retrospect, the most notable thing this shows me is that when I’m driving — typically running OsmAnd+ or Google Maps (or both) — the recorded points are high accuracy and frequent. When riding my bike, carrying my phone idly in a pocket, the GNSS sensor is likely in PRIORITY_PASSIVE mode, so the dots are both infrequent and low accuracy.

It’s also just neat to look at. Things jump out like a trip to IKEA in Canton, riding at Island Lake and Holly Wilderness, etc.

I’m curious to see what I can further tease out of the logged data. The HA Companion mobile app can get all sorts of interesting info via its Sensors. For example, the Activity Sensors on iOS automatically detect:

  • Stationary
  • Walking
  • Running
  • Automotive
  • Cycling

And on Android:

  • in_vehicle
  • on_bicycle
  • on_foot
  • running
  • still
  • tilting
  • walking

Plus there are things like what’s being done with the device, what’s seen about its environment (including visible wireless networks and Bluetooth devices), etc…

It might be neat to see what more I can get out of this. Or it might just end up as a nudge to decrease what HA is collecting (and possibly purge some of it from the db).

Of course, it pales in comparison to what the telcos, device manufacturers, OS vendors, and app vendors can do with their data engineers, massive troves of data and ability to cross-reference, etc. (A bit of a reminder that phones are just behavior-trackers that also make calls and take pictures…)

I hope to soon try migrating this HA instance from a Raspberry Pi 4B to a higher-powered slim PC. While I don’t intend to take this much further, it will provide more power for chewing on data like this and will hopefully let me figure out a disaster recovery plan for HA that includes preserving all logged data. When first setting up this map I tried to draw both a location layer and a heatmap, and this was a little too much for the Pi; as it ground to a halt Kristen noticed that the back yard lights weren’t turning on properly. Doh! Or I guess I could just do the processing on another machine…


OpenSCAD Is Kinda Neat

Designing a simple battery holder in OpenSCAD.

Earlier this year I designed a very basic box/organizer for AA and AAA batteries in Autodesk Fusion, making it parameterized so that by changing a few variables one could adjust the battery type/size, rows/columns, etc. This worked well, and after uploading it to Printables earlier today I realized that reimplementing it would probably be a good way to learn the basics of OpenSCAD.

OpenSCAD is a rather different type of CAD tool, one in which you write code to generate objects. Because my battery holder is very simple (just a box with a pattern of cutouts) and uses input parameters, I figured it’d be a good intro to a new language / tool. And in the future it might even be better than firing up Fusion for such simple designs.

After going through part of the tutorial and an hour or so of poking, here’s the result: battery_holder_generator.scad

Slicer showing the Fusion model on top and OpenSCAD on bottom.

By changing just a few variables — numRows and numColumns and batteryType — one can render a customized battery holder which can then be plopped into a slicer and printed. No heavy/expensive CAD software needed and the output is effectively the same.
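
As an aside, the same variables can be overridden from the command line, so a customized STL can be rendered without even opening the OpenSCAD GUI. A quick sketch (the output filename is arbitrary):

openscad -o aa_2x5.stl -D 'numRows=2' -D 'numColumns=5' battery_holder_generator.scad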

Without comments or informative output, this is the meat of the code:

AA = 15;
AAA = 11;
heightCompartment = 19;
thicknessWall = 1;
numRows = 4;
numColumns = 10;
batteryType = AA;

widthBox = (numRows * batteryType) + ((numRows + 1) * thicknessWall);
lengthBox = (numColumns * batteryType) + ((numColumns + 1) * thicknessWall);
depthBox = heightCompartment + thicknessWall;

difference() {
    cube([lengthBox, widthBox, depthBox]);
    for (c = [ 1 : numColumns ])
        for (r = [ 1 : numRows ])
            let (
                startColumn = ((c * thicknessWall) + ((c - 1) * batteryType)),
                startRow = ((r * thicknessWall) + ((r - 1) * batteryType))
            )
            {
                translate([startColumn, startRow, thicknessWall])
                cube([batteryType, batteryType, heightCompartment + 1]);
            }
};

Simply, it draws a box and cuts out the holes. (The first cube() draws the main box, then difference() subtracts the battery holes via the second cube() as their quantity and location (via translate()) are iterated.)

That’s it. Pretty neat, eh?

(One part that confused me is how I needed to use let() to define startColumn and startRow inside the loop. I don’t understand this…)

While this probably won’t be very helpful for more complicated designs, I can see this being super useful for bearing drifts, spacers, and other similar simple (yet incredibly useful in real life) geometric shapes.
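
To give a sense of how little code such a part takes, here’s a quick sketch (untested, just the same approach applied to a plain spacer) of a parametric cylindrical spacer; change the three values and render:

diameterOuter = 16;
diameterBore = 8;
heightSpacer = 10;

$fn = 64;  // smooth out the cylinders

difference() {
    cylinder(h = heightSpacer, d = diameterOuter);
    // Make the bore slightly taller than the spacer so the
    // subtraction cuts cleanly through both faces.
    translate([0, 0, -0.5])
        cylinder(h = heightSpacer + 1, d = diameterBore);
}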


Wireshark 4.6.0 Supports macOS pktap Metadata (PID, Process Name, etc.)

Four years after my post on doing network captures on macOS with Process ID, Wireshark 4.6.0 has been released, which includes support for parsing this extra metadata, including the process info.

So how do you do it? Easy! You just need the pktap interface parameter.

From the tcpdump(1) man page:

Alternatively, to capture on more than one interface at a time, one may use “pktap” as the interface parameter followed by an optional list of comma separated interface names to include. For example, to capture on the loopback and en0 interface:

tcpdump -i pktap,lo0,en0

An interface argument of “all” or “pktap,all” can be used to capture packets from all interfaces, including loopback and tunnel interfaces. A pktap pseudo interface provides for packet metadata using the default PKTAP data link type and files are written in the Pcap-ng file format. The RAW data link type must be used to force to use the legacy pcap-savefile(5) file format with a ptkap pseudo interface. Note that captures on a ptkap pseudo interface will not be done in promiscuous mode.

Therefore, we just need something like:

tcpdump -i pktap,en0 -w outfile.pcapng

or

tcpdump -i pktap,all -w outfile.pcapng host 192.168.0.6

And then open outfile.pcapng in Wireshark and under Frame Process Information you can find the process name, PID, etc. (See screenshot above.)

Filtering can be done with frame.darwin.process_info as listed here. For example:

frame.darwin.process_info.pname == "firefox"

or

frame.darwin.process_info.pid == 92046
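
These also combine with regular display filters, which makes it easy to narrow in on a single process’s traffic for one protocol. For example, something like:

frame.darwin.process_info.pname == "firefox" && dns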

This is super helpful for figuring out both what is generating unexpected network traffic and, inversely, what a given process is doing on the network. And now, thanks to Wireshark 4.6.0, it’s even easier.


Windows 10/11 Drivers for Epson Perfection 3170 Photo Scanner

I’ve had an Epson Perfection 3170 Photo scanner for years, it works great, and I don’t want to replace it. Unfortunately, Epson hasn’t published drivers for Windows 10, 11, etc., and choosing any of these OSes from their download page either results in a blank set of downloads, or a suggestion to download the Epson Event Manager which… doesn’t contain drivers. Great, thanks Epson. (And no, a driver doesn’t get automatically installed by Windows Update, either.)

However, it turns out there is a way to get it working under Windows 10/11/etc, and that’s the point of this blog post: Choose Windows Vista 64-bit and download Scanner Driver and EPSON Scan Utility v3.04A, filename epson12180.exe (or get it from this mirror). This self-extracts to C:\epson\epson12180_twain_driver_and_epson_scan_utility_304a and includes a signed 64-bit TWAIN driver that works great on Windows 11.


Automated Private Mobile Phone Photo Backup (Android to Apple Photos)

After lots of years of using different photo organization packages, from the lovely (but expensive) Lightroom Classic to ACDSee Photo Studio, from Gallery to various manual things, I’ve mostly settled on Apple Photos on macOS. It seems to work well, handles pretty much every format under the sun, does the basic editing tasks that I use, and is sufficiently widely used to have good community support.

Because I use an Android phone and eschew public cloud provider backups, there was no clear path for automatically importing the pictures I take into Apple Photos. But it’s possible, and this writeup shows the toolchain I use to do it.

The end result is that whenever I’m home I take a photo and it almost immediately appears in Photos. Or when I’m away and get back home, they automatically sync. Or if I am away for a while and want to sync my photos, I can VPN to home and sync. (I could make it sync from anywhere automatically, but I don’t yet because it gets really complicated when potentially uploading large amounts of data on mobile data, often in areas of poor connectivity.)

What I settled on was using FolderSync on Android to send the photos to a temporary (Inbox) folder on my NAS via SFTP. I then have Hazel watch this folder for newly arrived files, import them into Apple Photos (adding them to a New Photos album), and finally put a copy of the original in an archive folder.

Here’s how I configured this:

NAS / macOS

Due to the number of different NAS’ out there, configuring them is beyond the scope of this post, but in general what you need is an SFTP destination that’s accessible via your local network. I also have the same area available via SMB to my Mac, with this share mapped automatically at login.

On here I’ve created two directories: .../Pixel Backup/Sync Inbox/Camera, which is the inbox for photos from the phone, and .../Pixel Backup/Archive/Camera, a final archival resting place outside of what gets imported into Apple Photos, as a just-in-case backup.

(Using a separate folder for a new photo inbox radically improves performance of Hazel, because it then doesn’t have to watch a ~26GB / ~6000 file directory for changes.)

FolderSync

Create a folderPair (v2) to back up /storage/emulated/0/DCIM/Camera/ (Left account) to /Share/Pixel Backup/Inbox/Camera on my NAS via SFTP (Right account).

Under Scheduling, set a schedule for every 30 minutes, with Use WiFi checked, and limited to my home network under Allowed WiFi names. Under Sync options, check Instant sync and Only resync source files if modified (ignore target deletion).

The result of this is that when I’m home, within seconds of taking a photo, it appears on my NAS in .../Pixel Backup/Sync Inbox/Camera. Or when I’m away and take photos they’ll back up within 30 minutes of getting home.

(Yes, it’s possible to have FolderSync use SMB, but I prefer SFTP, so that’s how I set it up.)

Hazel

Create a Hazel rule to watch .../Pixel Backup/Sync Inbox/Camera for new files and import them into Photos:

If All of the following conditions are met

Extension is not tacitpart
Name does not start with .pending
Date Last Modified is not in the last 10 minutes

Do the following to the matched file or folder:
Import into Photos to album: New Photos
Move to folder: Camera

Files that are incompletely transferred (in flight or had an error) will have a .tacitpart extension if sync’d with a v1 folderPair, or will begin with .pending if a v2. This rule ensures that only complete files are processed, imports them into a new album called New Photos, and then moves them to .../Pixel Backup/Archive/Camera.

The 10 minute delay is needed because otherwise Hazel will sometimes import a partially-written file, resulting in a partial or corrupt image. This manifests as either a series of errors in Apple Photos about importing duplicates, or the same photo imported multiple times but incomplete, with the lower portion of the image corrupt and replaced with solid gray. (This delay could probably be reduced; I may experiment with that in the future.)

(Note: I tried syncing photos into a single folder and having Hazel watch that for changes, but performance was very poor. I’m unsure whether this was caused by using it over a network, the folder’s size, or the number of files to be parsed, although based on Hazel’s performance on a very full Downloads folder I suspect the latter. I didn’t bother to investigate further, as the Sync Inbox architecture works out better and makes it easier to troubleshoot and recover if something goes awry.)

Apple Photos

Apple Photos has a library containing all images, and these images may or may not be assigned to one or more albums.

My preferred workflow is to have individual albums for select projects, trips, or whatnot, and all other mobile phone images in a general Mobile Photos album. To facilitate this, Hazel puts all new photos from my phone into a New Photos album. I then periodically look at this album, sort the photos into the desired other albums, and then remove them from this album. (Or, sometimes, delete them entirely.)

While I could have everything just go into the main pool of photos, they’d be somewhat unsorted and dependent on metadata or parsing of the image content itself for sorting. For personal reasons I like to have each photo in an album of some sort, and I find that this inbox of sorts best matches how I like to manage my photos.

But the best thing is that photos I take using my phone while working on projects at home are immediately available in Photos and then backed up. And those taken while away get uploaded immediately upon returning or by connecting to VPN and telling FolderSync to sync immediately, “…on any available network connection”.

That’s It!

And, that’s that. I take photos on my phone, they automatically appear in Photos and get archived on my NAS, and then backed up. Effective, yet simple to use.

For what it’s worth, I also have some other sync tasks in FolderSync and Hazel to handle screenshots, images attached to or saved from messages (SMS, Google Chat, Facebook Messenger, Signal, etc), but it’s all done with similar flows to what’s above, so I’m not going to document them separately.


Update on 2025-Jun-20

I previously had issues with Photos where the Memories, Trips, and Featured Photos sections just didn’t work. All of these indicated that I needed to add more photos, but I’ve got some 64,000 photos spanning 20+ years of EXIF dates and locations, and the People & Pets, Map, Handwriting, and Illustration sections/detections work fine, so I don’t think it was an issue with quantity or the ability to parse the photos. This was fixed at some point between when I originally wrote the article in late January 2025 and now.

Since writing the article I also ran into some issues with partial syncing of files, especially when the wireless connection was poor and copying was slow, so I added another condition to the Hazel rule so it won’t import files modified within the last 10 minutes, which seems to have taken care of it.


ZOOZ ZSE44 Flat Lines at 0° (C or F)

I’ve been using, and liking, the ZSE44 Temperature | Humidity XS Sensor, with one in the attic and another in the back yard. It seems to work well, has a long battery life, and works great at pretty-far distances. But today I ran into an interesting quirk: it will not report negative numbers.

We’ve had a hefty cold snap here in Southeast Michigan, and last night the lows were well below 0°F, but I noticed that Home Assistant (see screenshot above) flatlined at 0°F for the back yard temperature.

I ended up asking ZOOZ customer service about this, and I was told it’s not designed for freezing temperatures, and thus won’t report negative numbers. So even though it’s working well at quite-below freezing temps, has been for the last week, and is even working fine now at ~4°F (reporting strong battery and active communication), a firmware limitation keeps it from telling me the actual temperature.

So, regardless of whether you have it set to Celsius or Fahrenheit, 0° in that scale is the lowest it reports. If you are using one of these in Celsius mode, you can get a bit more range by setting it to Fahrenheit and converting the value, but it only goes so far. (And yes, I tested the inverse by switching the ZSE44 to Celsius mode and saw that Home Assistant wouldn’t show below the converted 32°F.)
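
If Home Assistant doesn’t already handle the unit conversion for you, a small template sensor can do the math on the Fahrenheit reading. This is just a sketch; the entity id is a placeholder for whatever your ZSE44 temperature entity is actually named:

template:
  - sensor:
      - name: "Back Yard Temperature Converted"
        unit_of_measurement: "°C"
        device_class: temperature
        state: >
          {{ ((states('sensor.zse44_temperature') | float(0)) - 32) * 5 / 9 }}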

I’m a little irritated by this, but as it’s rarely this cold here, and the sensor otherwise works fine, I’m not planning to replace it. It’d just be nice to know the actual temperature outside via this sensor. The technical specs say it only works for 40°F to 90°F, but with it in the shade on our back fence or in the attic I’ve seen accurate values well beyond that.

Adding to the weirdness, the FAQ for the ZSE44 says:

The ZSE44 uses a SHTC3 [link mine] digital humidity and temperature sensor. The sensor covers a humidity measurement range of 0 to 100% RH and temperature measurements range of -40 C to 125 C.

While emailing back and forth with ZOOZ support (their support person claiming the limit is because of the components, and that they aren’t open to changing the firmware because that’d put it outside of spec), I did some digging to validate their claims, checking the specs of each component.

So between the range for the sensor, the suggested batteries, and the Z-Wave chip, a range of -40°C to +60°C would be fully within spec for all components.

Thus, the currently stated limitation for the ZSE44 of 40°F to 90°F (~4°C to ~32°C) is radically narrower than what any of the individual components (including the Z-Wave chip and battery) are spec’d to operate at. And I’ve demonstrated that all the components work over a much wider range: typical lower Michigan weather.

This makes me a bit more irked at the limitations of the firmware, but thankfully after a bit of email discussion this information was sent to their development team, so I’m hopeful they’ll recognize the disparity and correct things.

But for now, at least, I know what’s going on.

(I could probably go to a different temperature sensor type, but all the really wide-range ones, such as those meant for monitoring chest freezers, aren’t Zigbee or Z-Wave, and I don’t really want to add another protocol to my Home Assistant setup… Maybe that’ll be a project for this summer. Maybe…)

NOTE: This post is accurate as of 2025-Jan-23, on ZSE44 HW v2.0 w/ FW v2.0.


Bambu Lab P1S on IoT VLAN

I recently picked up a Bambu Lab P1S 3D Printer for around the house. After staying away from 3D printing for years, the combination of a friend’s experience with this printer (thanks, @make_with_jake!), holiday sales, looking for a hobby, wanting some one-off tools, and a handful of projects where it’d be useful finally got me to buy one. Having done half a dozen prints, thus far I’m pretty satisfied with the output and think it’ll be a nice addition to the house.

This printer, like many other modern devices, is an Internet of Things (IoT) device; something smart which uses a network to communicate. Unfortunately, these can come with a bunch of security risks, and are best isolated to a less-trusted place on a home network. In my case, that’s a separate network, or VLAN, called IoT.

Beyond the typical good-practice of isolating IoT devices to a separate network, I’m also wary of cloud-connected devices because of the possibility of remote exploit or bugs. For example, back in 2023 Bambu Lab themselves had an issue which resulted in old print jobs being started on cloud-connected printers. Since these printers get hot and move without detecting if they are in a completely safe and ready-to-go state, this was bad. I’d rather avoid the chance of this. And really, when am I going to want to submit a print job from my phone or anywhere other than my home network?

Bambu Lab has a LAN Mode available for their printers which ostensibly disconnects it from the cloud, but unfortunately it still expects everything to be on the same network.

I was unable to find clear info on working around this in a simple fashion without extra utilities, but digging into and solving this kind of stuff is something I like to do. So this post documents how I put a Bambu Lab P1S on a separate VLAN from the house’s main network, getting it to work otherwise normally.

The network here uses OPNsense, a pretty typical open source firewall, so all the configuration mentioned revolves around it. pfSense is similar enough that everything likely applies there as well, and the basic technical info can also be used to make this work on numerous other firewalls.

As of this writing (2024-Dec-19), this works with Bambu Studio v1.10.1.50 and firmware v01.07.00.00 on the P1S printer. This also works with OrcaSlicer v2.2.0 and whatever version of the Bambu Network Plug-in it installs. I suspect this works for other Bambu Lab printers, as the P1S has all the same features as the higher end ones (eg: camera) but I can’t test to say for sure. Also, everything here covers the P1S running in LAN Mode. It’s possible that things would work differently with cloud connectivity, but I did not explore this. So, insert the standard disclaimer here about past performance and future results…

Why can’t I just point the software at the printer?

To start, the release notes for Bambu Studio v1.10.0 have a section that says a printer can be added with just its IP, allowing it to cross networks:

Subnet binding support: Users can now bind printers across different subnets by directly entering the printer’s IP address and Access Code

This sounds like it’d solve the problem, and is a typical way for printers to work, but no… it just doesn’t work.

Despite having the required Studio and printer firmware versions I just couldn’t make it work. When trying this feature I’d see Bambu Studio trying to connect to the printer on 3002/tcp, but the printer would only respond with a RST as if that port wasn’t listening. Something’s broken with this feature, probably in the printer firmware. Maybe this’ll work in the future, but for now we needed another way…

Atypical SSDP?

On a single network the printer sends out Simple Service Discovery Protocol (SSDP)-ish messages detailing its specs; Studio receives these and lists the printer. But SSDP is based on UDP broadcasts, so these don’t cross over to the other VLAN (subnet).

The SSDP part of a packet looks similar to:

NOTIFY * HTTP/1.1\r\n
HOST: 239.255.255.250:1900\r\n
Server: UPnP/1.0\r\n
Location: 192.168.1.105\r\n
NT: urn:bambulab-com:device:3dprinter:1\r\n
USN: xxxxxxxxxxxxxxx\r\n
Cache-Control: max-age=1800\r\n
DevModel.bambu.com: C12\r\n
DevName.bambu.com: Bambu Lab P1S\r\n
DevSignal.bambu.com: -30\r\n
DevConnect.bambu.com: lan\r\n
DevBind.bambu.com: free\r\n
Devseclink.bambu.com: secure\r\n
DevVersion.bambu.com: 01.07.00.00\r\n
DevCap.bambu.com: 1\r\n
\r\n

When Bambu Studio receives this packet it gets the printer’s address from the Location: header, connects, and all works. But in a multi-VLAN environment we have different networks, different broadcast domains, and a firewall in between, so we need two things to work around this: getting the SSDP broadcasts shared across networks, and firewall rules to allow the requisite communication.

These also don’t seem to be normal SSDP packets, as they are sent to destination port 1910/udp or 2021/udp. It’s all just kinda weird… And this thread on the Bambu Lab Community Forum makes it seem even stranger and like it might vary between printer models?
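
If you want to watch these announcements yourself (or confirm your printer is actually sending them), a quick capture on the IoT-side interface works; adjust the interface name for your firewall:

tcpdump -n -A -i igb1_vlan2 'udp and (port 2021 or port 1910)'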

Regardless, here’s how I made this work with the P1S.

Static IP

The P1S (and I presume other Bambu Lab printers) have very little on-device network configuration, receiving network addressing from DHCP. I suggest that you set a DHCP reservation for your printer so that it always receives the same (static) IP address. This will make firewall rules much easier to manage.

SSDP Broadcast Relay

To get the SSDP broadcasts passed between VLANs a bridge or relay is needed, and marjohn56/udpbroadcastrelay works great. This is available as a plugin in OPNsense under System → Firmware → Plugins → os-udpbroadcastrelay, is also available in pfSense, or could be run standalone if you use something else.

After installing, on OPNsense go to Services → UDP Broadcast Relay and create a new entry with the following settings:

  • enabled:
  • Relay Port: 2021
  • Relay Interfaces: IoT, LAN (Choose each network you wish to bridge the printer between.)
  • Broadcast Address: 239.255.255.250
  • Source Address: 1.1.1.2 (This uses a special handler to ensure the packet reaches Studio in the expected form.)
  • Instance ID: 1 (or higher, if you have more rules)
  • Description: Bambu Lab Printer Discovery

On my OPNsense firewall, where igb1_vlan2 is my IoT network and igb1 is my LAN network, the running process looks like: /usr/local/sbin/udpbroadcastrelay --id 1 --dev igb1_vlan2 --dev igb1 --port 2021 --multicast 239.255.255.250 -s 1.1.1.2 -f

(Of course, in the event you have any firewall rules preventing packets from getting from the printer or IoT VLAN to the firewall itself — say if you completely isolate your IoT VLAN — you’ll need to allow those.)

Now when going into Bambu Studio under Devices and expanding Printers, the printer will show up. It may take a few moments for the printer to appear, as the SSDP announcements are only sent periodically, so be patient if it doesn’t appear immediately.

(Note that if other models of printers aren’t working, it may be useful to also relay port 1910. The P1S works fine with just 2021, so for now that’s all I’ve done.)

Firewall Rules

With Studio seeing the printer, and presuming that your regular and IoT VLANs are firewalled off from each other, rules need to be added to allow the printer to work. While Bambu Studio has a Printer Network Ports article, it seems wrong. I am able to print successfully without opening all the ports listed for LAN Mode, but I also needed to add one more that wasn’t listed: 2024/tcp.

Here’s everything I needed to allow from the regular VLAN to IoT VLAN to have Bambu Studio print to the P1S, along with what I believe each port to handle:

  • 990/tcp (FTP)
  • 2024/tcp to 2025/tcp (Unknown, but seems to be FTP?)
  • 6000/tcp (LAN Mode Video)
  • 8883/tcp (MQTT)

Nothing needs to be opened from the IoT VLAN; everything seems to be TCP and the stateful firewall handles the return path. (Even though the Printer Network Ports article, with its 50000~50100 range for LAN mode FTP, implies active mode FTP…)
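
A quick way to sanity-check the rules from the regular VLAN is to poke each of those ports with nc; the printer IP here is a placeholder for whatever your DHCP reservation hands out. Ports the printer isn’t actively listening on will likely show as refused, but a refusal still suggests the firewall is passing the traffic, while a silent timeout suggests it’s being blocked:

for p in 990 2024 2025 6000 8883; do nc -vz -w 2 192.168.20.50 $p; done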

And with that, it just works. I can now have my Bambu Lab P1S on the isolated IoT VLAN from a client on the normal/regular/LAN VLAN, printer found via autodiscovery, with only the requisite ports opened up.

Missing Functionality? Leaky Data?

Note that there are a few functions — like browsing the contents of the SD card for timelapse videos or looking at the job history — which only work when connected to the cloud service. This really surprises me, as I can think of no rational reason why this data should need to be brokered by Bambu Lab.

Unless they want to snarf up the data about what you print and video of it happening and when and… and…?

Digging into that sounds worthy, but is a project for another time. It’s a pretty good reminder of why isolating IoT devices is good practice, though. For now I’ll just manually remove the SD card if I want access to these things. And consider if maybe I should completely isolate the printer from sending data out to the internet…

Citations

Big, big thanks to Rdiger-36/StudioBridge and very specifically the contents of UDPPackage.java. This utility, which helps find Bambu Lab printers across VLANs by generating an SSDP packet and sending it to loopback, saved me a bunch of time in figuring out how Bambu Lab’s non-standard SSDP works.

All the discussion around issue #702, Add printer in LAN mode by IP address was incredibly helpful in understanding what was going on and why this printer didn’t seem to Just Work in a multi-VLAN environment. This thread, and watching what StudioBridge did, made understanding the discovery process pretty simple.

And as much as I dislike the AGPL in general, it worked out really well here. I wouldn’t expect a company like Bambu Lab to release their software so openly, but with the AGPL they had to. Slic3r begat PrusaSlicer, which begat Bambu Studio, which begat OrcaSlicer, giving us a rich library of slicers.

Updates

2024-Dec-22: After this worked fine for a few days I ran into problems printing from OrcaSlicer where jobs wouldn’t send. Digging in, I found that 2025/tcp was needed as well, so I updated the article above. It seems this is another FTP port? It’d sure be nice if this were documented.

2025-Jan-10: I have further isolated the P1S by disallowing it access to the internet at all. Now, beyond having its SSDP requests forwarded to other VLANs, it’s wholly isolated to the IoT VLAN. This works great, and is basically a true LAN-only mode.

2025-Mar-08: After not printing anything for a while I ran into a problem where uploads would fail with a 500 error. I suspect the printer lost its time and thus TLS was failing, as everything got better once I allowed the printer to talk DNS and NTP to the public internet. On every boot the printer resolves time.cloudflare.com and then queries it to set its time. (Unfortunately I didn’t save a screenshot of the error.)


HDMI-CEC to Onkyo RI Bridge

ESPHome device, a Seeed Studio XIAO ESP32S3 and level shifter with 3.5mm TS and HDMI connectors.

After getting the Onkyo RI support for ESPHome and Home Assistant in place, it was neat that I could turn my Onkyo A-9050 amplifier on and off remotely, but it wasn’t actually very useful; it didn’t save me any time/hassle. This iteration, adding HDMI-CEC support, brings it all together.

Back when I started this project, my main goal was to find a nice way to deal with toggling the power on the amplifier. Because I only use a single input on the amplifier and volume is already handled by the Apple TV remote, I don’t use the remote and it’s stored away in the basement. Normal practice was to manually press the power button on the front before using it, but this was irritating so I went looking for a better way, and the result was this project.

Initially I was looking at a way to use Home Assistant to coordinate powering the Apple TV and amplifier on, but it turns out there’s no good way to power up an Apple TV remotely; or at least not from anything that’s not an Apple device. I thought about going down the path of figuring out how the iOS / iPadOS does it, but the results of that would need to be incorporated into pyatv and chasing Apple’s changes was not a path I wanted to go down.

I then began thinking about it inversely: What if I could tell when the Apple TV woke and slept, and then take action based on that? After all, it’s already using the well-established Consumer Electronics Control (HDMI-CEC) to wake the TV… What if I could listen for that? And we’re always using the Apple TV remote when watching content and there’s no need to wake it while out of the room, so pressing a button on the remote to get things started is just fine.

Well, it turns out that was easier than I thought. Using Palakis/esphome-native-hdmi-cec, an HDMI-CEC component for ESPHome, and then doing a little protocol analysis, I now have a device that:

  • Listens for the Apple TV to wake up and sends a Power On to the receiver.
  • Listens for the Apple TV to go into standby and sends a Power Off to the receiver.
  • Sends events to Home Assistant whenever a broadcasted HDMI-CEC Standby (0x36) or Report Power Status (0x90) are received.
  • Exposes controls in Home Assistant for a variety of Onkyo remote control commands and broadcasting an HDMI-CEC Standby (0x36). The latter puts my TV and the Apple TV to sleep, and also gets heard by ESPHome (loopback) and results in the amplifier being powered off.
  • Exposes a service in Home Assistant allowing arbitrary HDMI-CEC commands to be sent.

The result is that when I press a button on the Apple TV remote to wake it up the amplifier powers on, the TV wakes up (as before), and all is ready to go with one button press. This satisfies my original goal, and also allows some lights to be turned on automatically.

I’ve still got some lingering architectural questions and may be digging further into the HDMI-CEC stuff to see if I can make it work better, but for now I’m happy. If/when I take this further, the big questions to answer are:

  • Currently ESPHome powers on the amplifier without Home Assistant. This feels rational for a device bridging the two protocols and makes the amplifier work more like a modern HDMI soundbar, but is it the best way to go? Running it all through HA would be a lot more complicated and network (and HA) dependent, but I could instead use the notification in HA to trigger a Power On at the receiver. Are there ever situations where I’d want this device to not power on the amplifier?
  • The HDMI-CEC implementation is very simple, solely listening for two messages I saw the Apple TV send and taking action on them. One of these, Report Power Status, is per-spec used to send more than notifications of power being on. Should this be changed or further built out? (Note: Because the library doesn’t implement DDC for device discovery and addressing and such, it can’t be a full-fledged implementation. But that much is likely not needed; there’s more I can do.)
  • Is it possible to wake the Apple TV via HDMI-CEC? It’s not immediately obvious how, but perhaps with a bit of probing…?

Hardware-wise, this was simple to do. All it required was getting an HDMI connector (I used this one), connecting pin 13 (CEC) to a GPIO, pin 17 to ground, and pin 18 to 5v (VUSB) as per the readme at Palakis/esphome-native-hdmi-cec. Since CEC uses 3.3v there was no need for a level shifter as with Onkyo RI. I was able to add this on to the previous adapter without a problem and everything just worked.

With this ESPHome configuration I changed things around a bit, both to simplify and secure the device and make things better overall. As I learned more about ESPHome and started thinking about securing IoT devices, I wanted to restrict OTA updates (including via the web UI) and access to the API. I also wanted to pull credentials out of my .yaml file so I could more easily share it. Changes to support this, and some other nifty things, are listed below (with a consolidated sketch of the relevant settings after the list):

  • Setting up a secrets.yaml to hold wifi_ssid, wifi_password, ota_password, and api_encryption_key.
    • Tip: All this involves is creating a secrets.yaml file in the same directory as the configuration .yaml and putting lines such as wifi_ssid: "IoT" or api_encryption_key: "YWwyaUNpc29vdGg3ZG9oazdvaGo2YWhtZWlOZ2llNGk=" in it. Then in the main .yaml reference this with ssid: !secret wifi_ssid or key: !secret api_encryption_key or so.
    • Generating an API key can easily be done with something like: echo -n `pwgen -n 32 1` | openssl base64
  • Setting a password for OTA updates.
    • Note: Once this password is set, changing it can be a bit complicated (see ESPHome OTA Updates for more information). I suggest picking one password from the get-go and sticking with that.
  • To further minimize unapproved access, I did not enable the fallback access point mode or the captive portal, and disabled the web server component (because it’s unauthenticated and allows firmware uploads). I’m still thinking about disabling safe mode.
  • Set name_add_mac_suffix: true to add the MAC address suffix to the device name. This makes it easier to use one config on multiple devices on the same network, such as when doing development work with multiple boards. (See Adding the MAC address as a suffix to the device name.)
  • Because my Onkyo RI PR has not been merged (as of 2024-Sep-01), I had been manually patching to add it. It turns out that some PRs can automatically be incorporated into the config via external_components, and this works great for my needs until this gets merged:
external_components:
  # Add the HDMI-CEC stuff for ESPHome
  - source: github://Palakis/esphome-hdmi-cec
  # Add PR7117, which is my changes to add Onkyo RI. Had not been merged as of 2024-Sep-01.
  - source: github://pr#7117
    components:
      - remote_base
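
Pulling the hardening bits from the list above together, here’s a minimal sketch of how the relevant top-level sections might fit in the device .yaml. The name is just a placeholder (the real config is linked below), and note that recent ESPHome versions want an explicit platform under the ota: section:

esphome:
  name: hdmi-cec-onkyo-ri-bridge   # placeholder name
  name_add_mac_suffix: true

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password
  # No ap: section, so no fallback access point gets created.

api:
  encryption:
    key: !secret api_encryption_key

ota:
  - platform: esphome
    password: !secret ota_password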

Despite stripping the configuration back a bit to secure it better, which in turn removes on-device overhead, I still have problems with the OTA update on the Seeed Studio XIAO ESP32S3. This is irritating because it means any changes require connecting a cable to flash it via USB, but I can also keep using the breadboarded SparkFun ESP32 Thing Plus for any future development.

The configuration I’m using can be found here: hdmi-cec-onkyo-ri-bridge_2024-sep-02.yaml

Note that this includes some development HDMI-CEC buttons, such as sending EF:90:00 and EF:90:01. This is part of some experimenting in attempts to wake up the Apple TV via CEC, but thus far doesn’t do anything. However, they serve as good examples of how to send multiple bytes to the bus. It also includes commented sections for the different ESP32 boards I’ve used and will likely need to be changed for your purposes.

Update on November 2, 2024

After using this for a while I ran into a couple quirks, so I’ve some updates to both the device config and ensuring it builds under the current dev version (ESPHome 2024.11.0-dev, as of about 10am EDT on 2024-Nov-02). Unfortunately this hasn’t solved the problem of uploading a new version via OTA on the Seeed Studio XIAO ESP32S3.

The current version of the device config can be found here: onkyo-a-9050_seeed_xiao_esp32c3_v1.2.0.yaml

The main changes here are that the ESPHome device no longer takes action (via Onkyo RI) based on the received HDMI-CEC commands, and that I cleaned up and clarified the events. There are three distinct events that can be acted upon:

  • HDMI-CEC: Report Power Status: On: Something reported its power status as On.
  • HDMI-CEC: Report Power Status: Standby: Something reported its power status as Standby.
  • HDMI-CEC: Standby Command: Something sent a Standby command.

I now use Home Assistant to trigger on HDMI-CEC: Report Power Status: On, turn on some lights, and press the Onkyo RI: On button, turning the amplifier on. For shutting things down I trigger on HDMI-CEC: Report Power Status: Standby and turn the amplifier and lights off. This is more dependent on HA, but it also gives me more flexibility.

(I’ve not (yet) started looking into waking the Apple TV via HDMI-CEC.)

Comments closed