In our HA instance I have two helpers, each called Interior Lights: one a group of switches and the other a group of lights, both containing only things we’d consider interior lights. I then have an automation, All Interior Lights Off, that turns both groups off; I’ll commonly trigger it before going to bed, when leaving the house, etc.
Because a group entity can only hold one type of entity, and we have lights that are a mix of light entities (i.e. bulbs) and switch entities (i.e. smart switches controlling dumb bulbs), we need one group for each type.
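For reference, here’s a minimal sketch of what such groups might look like in configuration.yaml (the entity IDs are made up, and the same thing can be done with Helpers in the UI):

# Two groups with the same name, one per entity type; entity IDs are hypothetical
light:
  - platform: group
    name: Interior Lights
    entities:
      - light.living_room_lamp
      - light.bedroom_strip

switch:
  - platform: group
    name: Interior Lights
    entities:
      - switch.kitchen_overhead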
The specific problem was that two lights in the light group — one an IKEA TRADFRI 800 lumen bulb and the other a BTF-LIGHTING Zigbee single-color LED controller — started acting oddly. At first it was hard to tell what was going on: the IKEA bulb would seem to be on when not expected and the LED strip would be off when it should be on. But eventually I found repeatable cases:
When triggering the All Interior Lights Off automation, if the IKEA bulb was already off, it’d turn on at minimum brightness.
After using All Interior Lights Off, the BTF-LIGHTING LEDs would, on the next on command, flicker on and then almost immediately turn off.
It turns out the problem was having transition: 0 on the light group’s off automation. Back when tweaking things for the Hue bulbs I set this because otherwise those bulbs would dim out over 1-2 seconds instead of just turning off, and I didn’t like that. Unfortunately, this change exposed some bugs.
So I removed the transition from the automation and poof; no more weird problems.
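For the curious, the fixed action block looks something like this; a sketch with hypothetical entity IDs, not my exact config:

# "All Interior Lights Off" actions; entity IDs are hypothetical
action:
  - service: light.turn_off
    target:
      entity_id: light.interior_lights
    # Removed, as this is what exposed the bugs:
    # data:
    #   transition: 0
  - service: switch.turn_off
    target:
      entity_id: switch.interior_lights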
Pick one of the files from the ZIP and let it load.
After it loads, go to Strava’s Global Heatmap, logging in if you need to. Then click the nine-box grid icon (same as the extension’s icon) that appears in the upper-right of the map.
Click Open in JOSM and the global heatmap will appear there.
To customize things a bit more — which helps quite a bit with visibility in JOSM — one can edit the map by picking a different activity and changing the gColor query parameter in the address bar before opening in JOSM.
gColor options include hot, blue, purple, gray, and bluered. The activity can be changed via sport=; options include the main Walk and Ride and the lesser-used MountainBikeRide, GravelRide, Snowshoe, etc.
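For example, a tweaked heatmap URL might look something like this (the sport and gColor parameters are from above; the other parameters and coordinates will vary):

https://www.strava.com/maps/global-heatmap?sport=MountainBikeRide&gColor=blue#13/42.68/-83.15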
But note that the extension doesn’t support all of these, so you may need to play with the URI in the new tab that opens to get things to display quite as you want. (I guess that’d be easy enough to change…)
UHMW PE replacement ring applied to a CS-M8100-12 cassette.
Stock Y0GX01500 on a CS-M8100-12 cassette.
Many Shimano cassettes, such as the CS-M8100 (XT, 12-speed), have a thin adhesive ring (part number Y0GX01500) on the back side, where it sits against the Microspline freehub body.
Unfortunately, these can easily be lost as they tend to stay on the freehub body when removing the cassette. This is exactly what happened when I sent the NOBL wheels from my Mach 4 SL in for a warranty rim replacement. Some folks advocate for removing them, believing they cause cassette wobble, but the main purpose seems to be eliminating noise and fretting between the cassette and freehub body.
Since I don’t like bike noises, I wanted another. They can be bought online for something like $9/ea before shipping, but that seems like a lot… So a better solution? Make one!
37mm x 33mm ring cut from UHMW PE on a Cricut.
Measuring a new ring on a spare cassette showed it to be 37mm OD x 33mm ID, roughly 0.2mm thick. I have some 0.0115″ / ~0.29mm ultra-high-molecular-weight polyethylene (UHMW PE) tape from McMaster-Carr (part 76445A764) that I use for rub protection on bike frames, so that seemed perfect. Kristen cut a ring out with her Cricut (with a Deep Point Blade, set to “thin cardboard”), I stuck it to the cassette, and that was that. Much better than spending $9 and waiting for it to arrive.
I had originally tried to print one with PETG filament, but when the first of two broke coming off the build plate I figured it probably wasn’t the right material and would come apart under load, leading to a loose cassette, noise, etc. UHMW PE tape is very malleable and often used to stop noise between rubbing parts, so it seemed like the better choice.
In continuing to optimize things I wanted a bulb with an even dimmer and warmer initial brightness than the IKEA TRADFRI LED2101G4 (in an E26 to E12 adapter) I’ve been using. I’ve now settled on the Philips Hue White Ambiance 60W A19, as it’s both dimmer and warmer at initial turn-on and has a brighter high end, making it more usable when working on things around the bedroom.
With the Lighten Up! I had used a halogen bulb, which, combined with the dimmer, made the initial brightness so low the filament was barely visible to the naked eye. This made the initial-on unnoticeable and didn’t jar me awake. To try and replicate something similar I considered the Shelly Dimmer 2 and putting a halogen bulb back in place, but I didn’t really want to go back to bulbs that give off so much heat and use so much power. And while I find Shelly devices well engineered, I wasn’t very interested in more WiFi IoT devices. (I really prefer Zigbee or Z-Wave for security reasons.)
Thanks to this /r/homeassistant thread I was prompted to try some Hue bulbs, so $76.31 to Amazon later I had a pair. They adopted easily, directly into HA, and after a little tweaking (mostly adjusting automations for the new devices), I’m happy with them. The warm/low setting is really quite dim and yellow-reddish, and at full brightness it’s… nicely bright.
I may tweak the curve used for bringing the brightness up, but thankfully the script I use (Ashley’s Light Fader 2.01) has a whole range of curves available. I’m currently using the default easeInSine, but this morning it seemed to hit the final brightness a bit abruptly. That makes some sense, as easeInSine ramps slowly at first and is steepest right at the end, so I may try something like easeInOutSine, which tapers off at both ends.
SwitchBot sensor showing well-below-0°F reading and still 100% battery after a month of winter.
At ~US$31 for a three pack (via Amazon) these SwitchBot sensors are 1/3 the price of the ZSE44, take AAA batteries, and are IP65 rated. The specs also claim they work down to -40°C (-40°F) with lithium batteries. Basically perfect for outdoor spaces, including attics, crawl spaces, etc.
I installed one side by side with the ZSE44 with the solar radiation shield on the back fence, and as hoped, it’s reading well below zero and working fine. I also put the other two (from the three pack) in the fridge and freezer to see how they’d do there, and while the freezer doesn’t get as cold as it currently is outside, it was a good preview of data before the temperatures dropped. All three are currently at 100% battery.
Now that we’ve had our first well below 0°F temperatures of the season I can say that yes, the SwitchBot sensor is working properly, with more frequent updates.
When I initially set up Home Assistant its purpose was to log temperature and humidity at various points around the house. I started with the cheapest sensors available at the time — the Aqara Temperature and Humidity Sensor — but a couple years on I’m finding these a bit disappointing. The CR2032 battery life isn’t great (even indoors they last about 8 months), and I’ve had a few of them just die. While they are small, the size benefits don’t outweigh the battery and longevity hassles. The Zigbee connectivity is pretty simple and mostly works, but when the battery or device dies it just kinda… falls off the network and rejoins (even after a battery swap) unreliably. I think I’ve disposed of three in the last year.
The biggest downside to these SwitchBot sensors is they use Bluetooth Low Energy (BTLE) for communication. This does not have nearly the range of Z-Wave, which was my original reason for putting the ZSE44 in the back yard.
Thankfully Home Assistant can use Bluetooth Proxies (networked remote BT sensors), and the Shelly 1 — a UL-listed WiFi-controlled smart relay — is one. I already had a few of these around the house to control light fixtures, so via the proxies I’m able to get enough BTLE coverage to pick up the sensor along the back fence and the ones in the fridge. It’s no Zigbee or Z-Wave or Thread-like self-healing mesh, but so far it’s working well. And really, with the devices’ fixed locations, there’s not a ton of practical difference between setting up a mesh network with well-planned routers (Zigbee) or repeaters (Z-Wave) and deploying BTLE proxies.
I’ve also picked up two of the SwitchBot Meter Plus devices, a temperature/humidity sensor with an LCD display that runs off of two AAA batteries. It’s not as robust as the Indoor/Outdoor sensors, but is perfect for somewhere indoors where I want to see the local temperature at a glance and also log it in Home Assistant. In years past I’d place temperature/humidity displays like this around the house so I could see some data, and these are basically the same, except with logging to Home Assistant.
Long-term, as they fail, I could see myself replacing the remaining Aqara sensors around the house with these. Even the couple of ZSE44 sensors I have may get replaced with these (particularly the one in the back yard). But, for now, I’m just glad to know how far below 0° it really is, and to have a record of it, because data is nifty.
Hoover CleanSlate sprayer being primed after it’d dried out.
We have a Hoover CleanSlate portable vacuum thing and it’s incredibly useful for cleaning up small spills / stains / cat vomit / etc. This morning when I wanted to use it the sprayer was no longer working.
It had worked last time, and I’m diligent about letting it dry out between uses (because I don’t like mold); it turned out this dried out the pump, which in turn meant it needed some time to self-prime before it’d spray.
The solution was simple: put a releasable cable tie around the sprayer handle, put that in the laundry tub, and let the unit run for a few minutes. After this it was spraying fine and all was good. (Yes, you could hold down the trigger, but I’m lazy. And I wanted to make a cup of tea.)
(This is part of my neo-Luddite series where I document things in writing, because I find watching a multi-minute YouTube video to access info that could be conveyed in a few paragraphs of text to be maddening.)
Ever since getting my new monitor (a Dell U3225QE — a nice IPS LCD after some OLED issues) I’ve been having problems with it not going to sleep. But that’s not usually a monitor problem, especially as I could manually put it to sleep… So what’s keeping macOS from putting it to sleep?
Well, thankfully with pmset one can see what’s going on:
c0nsumer@mini ~ % pmset -g
System-wide power settings:
Currently in use:
standby 0
Sleep On Power Button 1
autorestart 0
powernap 1
networkoversleep 0
disksleep 10
sleep 0 (sleep prevented by backupd-helper, powerd, backupd, coreaudiod, coreaudiod)
ttyskeepawake 1
displaysleep 10 (display sleep prevented by CEPHtmlEngine)
tcpkeepalive 1
powermode 0
womp 1
c0nsumer@mini ~ %
There we go, seems CEPHtmlEngine is preventing the display from sleeping. So what is it?
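One quick way to find out is to ask the system itself; the assertion list names the process holding things awake, and ps shows where the binary lives:

pmset -g assertions | grep -i PreventUserIdleDisplaySleep
ps axo pid,comm | grep -i CEPHtmlEngine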
Really? Illustrator? Huh… I have been working on a new map of Bloomer Park (in anticipation of the forthcoming Clinton River Oaks Park changes) for CRAMBA and leaving it open in the background… I guess that’s it.
Strangely, simply closing and re-launching Illustrator made the assertion go away. And now the problem is gone.
Oh, Adobe…
At least it’s easy to tell why it was happening.
(This is Adobe Illustrator v29.8.4 on macOS Sequoia 15.7.3.)
When I switched from an iMac to a Mac mini in late 2024 I chose an ASUS ProArt 5K PA27JCV (27″, 60 Hz) for the monitor, and while it looked great, it died after 14 months, seemingly with a backlight or power supply problem. ASUS’ warranty support requires shipping the monitor back, potentially waiting 3-4 weeks, and then getting a replacement. And worse, the replacement could have dead pixels, as the ASUS warranty doesn’t consider ≤5 dark pixels a problem.
The old HP ZR2440w that I swapped in as a spare wasn’t cutting it, so with an indeterminate wait ahead of me, the chance of receiving something with bad pixels, and a vague interest in something larger with a faster refresh rate, I went looking at new monitors.
Coming to the realization that 4K is probably fine, I picked up a Dell 32 Plus 4K QD-OLED Monitor – S3225QC from Costco for $499. It was well reviewed online and looked pretty good when I played with one for about 20 minutes at Micro Center. But when I got home and sat in front of it doing my normal things it looked a bit… different… almost as if my glasses weren’t working quite right. I figured new monitor tech just needed some time to get accustomed to; after all, it had a very high contrast ratio and sharp pixels, so maybe it was just that?
After a few days it still didn’t feel right, so I began looking for a solution. Costco has a 90-day return window for computer monitors, so I had some time, but this didn’t look good; I wanted an answer soon.
I was fortunate to be able to borrow a Dell UltraSharp 32 4K USB-C Hub Monitor U3223QE for the weekend, which was perfect: being a high end display with the same resolution and panel size as the S3225QC, I could compare them side by side. And in the end the LCD just looked better.
Dell S3225QC QD-OLED / Dell U3223QE LCD
I took some macro photos of both displays and it turns out that what was bothering me was fringing, a problem common to OLEDs. It was hard to point out during normal use other than text-is-a-bit-blurry-and-weird, or like an oversharpened image, or almost like artifacted text in a JPEG, but with photos it was much easier to see what’s going on. And better, the cause: the arrangement of the subpixels, the little red/blue/green dots that make up a pixel.
As shown above, the subpixels in the Dell S3225QC QD-OLED form a square with green on the top, a larger red subpixel in the lower left, and a smaller blue one in the lower right. The Dell U3223QE, a typical LCD, has three vertical stripes making a square. The result is that high contrast edges look very different on an OLED, often with a strong off-color border — or fringe — along horizontal and vertical lines.
In the photos above, note the vertical part of the 1, which has red and green dots along its right side, and the large red dots along the top of the 6 with green along the bottom. These are the strongly colored fringes. (On the LCD they appear white, as the three equal-size subpixels contribute equally.)
This meant that the things I tend to work with — text and fine lines in maps and CAD-type drawings — are not right at all on the pixel pattern found in this OLED panel. Beyond the pixel pattern, I also suspect that the much crisper pixels (defined points of light) contribute to the fringing having an artifacting-like effect.
This was much more pronounced when looking at light text on a dark background; the way that I read most websites. Visual Studio Code does a wonderful job demonstrating this problem:
Dell S3225QC QD-OLED / Dell U3223QE LCD
This gets at why OLEDs make great TVs and gaming monitors. The contrast is outstanding, color is excellent, and high refresh rates are ideal for moving images and fast-response games. And there’s no noticeable fringing because edges are constantly moving across pixels; almost nothing is still. They also work great on small devices like phones where the pixel density is so high that fringing is too small to see.
But on desktop monitors for still things — text and fine lines — OLEDs currently just aren’t great; I guess that’s why office and productivity type monitors are still LCDs. Even though I don’t like being that person who returns computer stuff just because they don’t like it, I ended up returning the monitor after only four days of using it. The S3225QC and its QD-OLED just don’t work for me; it made my eyes feel funny to use.
Whether that’ll mean buying my own U3223QE, perhaps a Dell U3225QE (which adds 120 Hz scanning, an ambient light sensor, and a Thunderbolt dock), or just waiting for an ASUS PA27JCV to come back, I’m not sure… But whatever I end up using will, for now, be an LCD, not an OLED.
Last two years of my phone’s location, as gathered by Home Assistant.
Part of our Home Assistant (HA) setup uses the Companion Mobile App for easy remote control and to collect data from our devices. The main tracked item is the phone’s location, so HA can tell if we’re home or not, and currently I only use it to change how some lighting automations work.
I also have HA set up to log all device state data (switches, outlets, climate sensors, power consumption) to a local instance of InfluxDB, and then have Grafana installed so I can visualize this data.
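The logging side of that is HA’s influxdb integration; a minimal sketch of it in configuration.yaml might look like this (host and database names are assumptions, for a local InfluxDB v1 instance):

# Log sensor and switch states to a local InfluxDB v1 instance
influxdb:
  host: localhost
  database: homeassistant
  include:
    domains:
      - sensor
      - switch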
Mains Voltage via Home Assistant from February 2024 through December 2025.
My original use for this was long-term logging of temperature and humidity sensor data — which is neat to see — but I’ve since experimented with graphing things like mains voltage. This was neat because it made it easy to see how voltage drops and becomes erratic during summertime cooling periods. It also showed that grid voltage jumped up by ~2VAC in March 2025, around which time I recall DTE doing utility work on the grid just north of our house. (Yes, evidence of them improving things locally.)
Late on Christmas evening, wanting some time to just sit alone and do things, I put together a map showing where my phone had been. I’m pretty happy with how it came out, as I can now input a time range and dots will appear for each logged location, color-coded with geopositioning accuracy (brighter green is more accurate).
I also used this as another exercise in working with LLM tools like ChatGPT. I’m (finally?) realizing how useful these can be when thought of as a modern search engine. There are still constant reminders of how imperfect and problematic results can be, but with a domain background it’s helpful. I find that thinking of these tools as a tireless (yet emotionless) junior employee who makes lots of mistakes and needs all responses tested and vetted works… decently… at pointing me in the right direction.
But I digress… Here’s the query that’s the main point of this and makes it all go:
SELECT "latitude", "longitude","gps_accuracy"
FROM "homeassistant"."autogen"."state"
WHERE "entity_id" = 'pixel_8'
AND $timeFilter
AND "gps_accuracy" < 100
Last 30 days of phone location data.
It was then simply a matter of putting this into a Geomap panel that displays a point for each location, colors it based on gps_accuracy, and looks decent. I was even able to place it all on the Thunderforest Landscape map tiles, which show OSM-mapped trails and have been oh-so-useful on my RAMBA Trails Map.
Initially I looked at a heat map, but it didn’t seem as useful as individual points. I may explore this later, but the device where I’m currently running HA is a bit under-powered for this. And note that the query above excludes records that have a GNSS accuracy worse than 100 meters as this generally means that GPS (et al) wasn’t working at all and geopositioning likely came from local mobile towers (which shows me as being on tall local buildings, in fields I’d never visit, etc).
While obvious in retrospect, the most notable thing this shows me is that when I’m driving — typically running OsmAnd+ or Google Maps (or both) — the recorded points are high accuracy and frequent. When riding my bike, with my phone idle in a pocket, the GNSS sensor is likely in PRIORITY_PASSIVE mode, so the dots are both infrequent and low accuracy.
I’m curious to see what I can further tease out of the logged data. The HA Companion mobile app can get all sorts of interesting info via its Sensors. For example, the Activity Sensors on iOS automatically detect:
Stationary
Walking
Running
Automotive
Cycling
And on Android:
in_vehicle
on_bicycle
on_foot
running
still
tilting
walking
Plus there are things like what’s being done with the device, what it sees of its environment (including visible wireless networks and Bluetooth devices), etc…
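As a sketch of how one of these might get used, here’s a hypothetical automation keyed off the Android activity sensor; the entity ID is a guess, as actual names vary by device:

# Notify when the phone thinks a bike ride has started; entity ID hypothetical
automation:
  - alias: "Phone says cycling"
    trigger:
      - platform: state
        entity_id: sensor.pixel_8_detected_activity
        to: "on_bicycle"
    action:
      - service: notify.notify
        data:
          message: "Looks like a bike ride just started."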
It might be neat to see what more I can get out of this. Or it might just end up as a nudge to decrease what HA is collecting (and possibly purge some of it from the db).
Of course, it pales in comparison to what the telcos, device manufacturers, OS vendors, and app vendors can do with their data engineers, massive troves of data and ability to cross-reference, etc. (A bit of a reminder that phones are just behavior-trackers that also make calls and take pictures…)
I hope to soon try migrating this HA instance from a Raspberry Pi 4B to a higher-powered slim PC. While I don’t intend to take this much further, it will provide more power for chewing on data like this and will hopefully let me figure out a disaster recovery plan for HA that includes preserving all logged data. When first setting up this map I tried to draw both a location map and a heatmap, and this was a little too much for the Pi; as it ground to a halt Kristen noticed that the back yard lights weren’t turning on properly. Doh! Or I guess I could just do the processing on another machine…
Earlier this year I designed a very basic box/organizer for AA and AAA batteries in Autodesk Fusion, making it parameterized so that by changing a few variables one could adjust the battery type/size, rows/columns, etc. This worked well, and after uploading it to Printables earlier today I realized that reimplementing it would probably be a good way to learn the basics of OpenSCAD.
OpenSCAD is a rather different type of CAD tool, one in which you write code to generate objects. Because my battery holder is very simple (just a box with a pattern of cutouts) and uses input parameters, I figured it’d be a good intro to a new language / tool. And in the future might even be better than firing up Fusion for such simple designs.
Slicer showing the Fusion model on top and OpenSCAD on bottom.
By changing just a few variables — numRows and numColumns and batteryType — one can render a customized battery holder which can then be plopped into a slicer and printed. No heavy/expensive CAD software needed and the output is effectively the same.
Without comments or informative output, this is the meat of the code:
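Here’s a sketch of it, with placeholder dimensions (the real file uses proper battery specs and a bit more polish):

// Parameters: change these to get a different holder
batteryType = "AA";    // "AA" or "AAA"
numRows     = 2;
numColumns  = 4;

// Placeholder sizes in mm
dia   = (batteryType == "AA") ? 15 : 11;  // battery hole size
depth = 25;                               // hole depth
wall  = 3;                                // wall thickness
pitch = dia + wall;                       // hole-to-hole spacing

difference() {
    // The main box
    cube([numColumns * pitch + wall, numRows * pitch + wall, depth + wall]);

    // Cut out a square hole for each battery
    for (row = [0 : numRows - 1], col = [0 : numColumns - 1])
        let (startColumn = wall + col * pitch,
             startRow    = wall + row * pitch)
        translate([startColumn, startRow, wall])
            cube([dia, dia, depth + 1]);  // +1 so the cut clears the top
}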
Simply, it draws a box and cuts out the holes: the first cube() draws the main box, then difference() subtracts the battery holes via the second cube(), with their quantity and location (via translate()) iterated over rows and columns.
That’s it. Pretty neat, eh?
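And for what it’s worth, OpenSCAD can render variants entirely from the command line by overriding those variables with -D (assuming the file is named battery_holder.scad):

openscad -o aaa_holder.stl -D 'numRows=3' -D 'numColumns=4' -D 'batteryType="AAA"' battery_holder.scad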
(One part that confused me is how I needed to use let() to define startColumn and startRow inside the loop. I don’t understand this…)
While this probably won’t be very helpful for more complicated designs, I can see this being super useful for bearing drifts, spacers, and other similar simple (yet incredibly useful in real life) geometric shapes.