Years ago Mark Ferlatte told me about the AIDS/LifeCycle (Wikipedia) charity ride from San Francisco to Los Angeles to raise money for the San Francisco AIDS Foundation and the Los Angeles LGBT Center. Money raised by this event allows the two non-profits to provide free HIV/AIDS medical care, testing, and prevention services. Each rider in the event raises at least $3000, with the event raising over $220,000,000 since it began in 1993.
At 545 miles over seven days, riding everywhere from the Pacific Coast Highway to proper California mountains, this sounded like a wonderfully fun way to see a new part of the country, really enjoy being outside, and most importantly raise money to directly help save lives. But it always seemed like such a logistical challenge that it fell into the back of my mind as a one-day-in-the-future goal.
Well, here we are, and 2022 is the year!
This year Kristen and I are both signed up for the 2022 AIDS/LifeCycle ride! On June 5th we will set out from Cow Palace in San Francisco to spend the next week pedaling through one of the most beautiful parts of the country with a couple thousand other folks, all of us with one goal: helping those with HIV/AIDS.
This is where we need your help. Specifically, we need money.
Kristen and I are each working to raise at least $3000 through donations from people like you. We all give money to trails, trail organizations, and other groups which help make our lives more fun and enjoyable. Please join us in also giving a bit more to save lives, so everyone can have a great life.
Please click one of these links to donate: Steve • Kristen
(Donations are to a non-profit, and thus tax deductible.)
Thank you for your help, and for making the lives of those with HIV/AIDS better!
There are many times when one wants to see which process is responsible for network traffic on an endpoint. While there are ways to look at which process has an open socket or whatnot, this doesn’t help with UDP, and it’s often quite useful to simply do a recording and then later see which process is responsible for creating a packet.
On Windows this is very easy using Pktmon / netsh trace / Network Monitor to do a capture, but on macOS it’s not that straightforward.
Using tcpdump with the -k argument one can print process name and PID to the console. In this example I’m filtering on just the host 8.8.8.8 (Google Public DNS) then running dig in another window to look up dingleberrypie.com:
c0nsumer@myopia ~ % sudo tcpdump -k -i en0 host 8.8.8.8
Password:
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on en0, link-type EN10MB (Ethernet), capture size 262144 bytes
09:17:58.210959 pid dig.76987 svc BE IP myopia.home.nuxx.net.56248 > dns.google.domain: 10294+ [1au] A? dingleberrypie.com. (47)
09:17:58.292541 IP dns.google.domain > myopia.home.nuxx.net.56248: 10294 1/0/1 A 96.126.107.52 (63)
The first captured packet is the query going out to 8.8.8.8, and the pid dig.76987 portion shows that it’s from a dig process, which had process ID 76987.
For now it seems the best way to do this is to record the capture to a file, then feed it back into tcpdump. Recording this requires using the pktap pseudo interface (see the tcpdump man page about the --interface argument) to ensure this data is saved into the file. The same capture above, writing to a file called out.cap, would be as follows:
sudo tcpdump -i pktap,en0 host 8.8.8.8 -w out.cap
This can then be fed back into tcpdump for parsing/filtering/viewing:
c0nsumer@myopia ~ % tcpdump -k -r out.cap
reading from PCAP-NG file out.cap
09:27:10.964758 (en0, proc dig:77619, svc BE, out, so) IP myopia.home.nuxx.net.52983 > dns.google.domain: 56446+ [1au] A? dingleberrypie.com. (47)
09:27:11.018809 (en0, proc dig:77619, svc BE, in, so) IP dns.google.domain > myopia.home.nuxx.net.52983: 56446 1/0/1 A 96.126.107.52 (63)
c0nsumer@myopia ~ %
In this case it shows both the sending process and the process which received the packet.
When using this for deeper investigation, I’ll use the macOS tcpdump to grab a very broad capture and then do first-pass filtering before bringing it into Wireshark for more detailed analysis. See the PACKET METADATA FILTER section of the tcpdump man page for details on how to filter on a PID, process name, etc. From the header of this section:
Use packet metadata filter expression to match packets against descriptive information about the packet: interface, process, service type or direction.
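If you don’t want to learn the metadata filter syntax, a crude but effective first pass — my own shortcut, not something from the man page — is to just grep the -k output for the process you care about once the capture has been read back in:

tcpdump -k -r out.cap | grep 'proc dig'

This only filters the decoded text (the packet data itself isn’t kept), so it’s best used to figure out which hosts and ports a process is talking to before re-filtering the original out.cap into something smaller for Wireshark.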
Kristen and I have been spending a good deal of time in the Ishpeming and Negaunee area this year, and I’ve made it a personal goal to become more familiar with the local trails — both RAMBA-supported and otherwise — and get them documented in OpenStreetMap (OSM). Having these trails in OSM provides two big benefits: they appear in other mapping tools (such as OsmAnd, GaiaGPS, Strava, MapMyRide/MapMyRun) and the trail data can be freely used to build other tools.
Over the years of making trail maps with OpenStreetMap I’ve mostly produced PDFs for printing, leaving mobile and online mapping to other apps. These work well, but have the big downside of rendering routes in their own style. That is, online maps in these tools show the routes, but they look quite different from print maps, even when the data needed to display more detail (such as colour=* tags on relations) is in OSM.
While these apps work pretty well, and I use them myself routinely for navigation, I got the itch to see if I could make a web-based map that looked more like locally produced print maps than app-based renderings. It seemed like a good project, a good way to learn some basics of modern web development, and maybe make something useful.
What I ended up with was a slippy map showing the RAMBA trails that uses layers of pre-rendered tiles to show the different official trail routes, placed over a background map. The map viewer is client-side JavaScript that loads static tiles from a basic web server, making this a very simple app to host (just a bunch of static files on a site).
In this post I intend to document the major steps of how I made this map, why I used the tools I did, and share the code to reproduce (and update) this build. Hopefully this’ll allow others to get their head around these map presentation basics, perhaps even reusing this work to make and host another map.
Update OSM Data
Mostly outside the scope of this article but worth a mention, a significant amount of time was spent ensuring that the RAMBA area trails are accurately listed in OSM. Without good data it would not be possible to go further, as the OSM data is the base data used to create other maps.
By combining information from a bunch of sources, and doing some personal surveying of trails while riding and hiking, I was able to get all of the official RAMBA trails documented, along with numerous other paths and tracks in the area. This builds a complete picture of the usable trails in the area.
Information used to get the RAMBA trails in OSM included:
Hand-annotated map from Danny Hill listing local trail names.
These sources were combined in JOSM and cross-referenced, ways were drawn and tagged, relations were built out, and before long a complete picture of the RAMBA-area trails — official and otherwise — was in OpenStreetMap.
Most importantly, beyond documenting the trail locations, trails were grouped into route relations to show each official route, and then all the official routes were grouped into a superroute for all the RAMBA trails. As of time of writing, relation RAMBA Trails (12425503) is the superroute that aggregates the individual trail routes such as Epic Loop (8467869) and Malton Loop (8468010).
The result of this is accurate trail data that’s easy to query for and style using other tools.
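For anyone curious what this looks like in the data, a mountain bike route relation typically carries tags along these lines; the values here are illustrative, so check the relation IDs above in OSM for the real tags:

type=route
route=mtb
name=Epic Loop
colour=#FF0000

Each member of the relation is one of the highway=path ways making up that loop, and the colour=* tag is what lets a renderer draw the whole route in a consistent color.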
Rendering Tiles with Maperitive
There are myriad ways to render tiles from OSM data, with most of these involving setting up a database server and a toolchain which’ll generate, cache, and serve tiles on demand. For most large data sets this makes a lot of sense, but for a small trail system I really wanted to use static tiles I could serve from a simple webserver.
Eventually I came across Maperitive, a desktop application for Windows that takes GIS data (including OSM), stylizes it with a relatively simple ruleset, and can generate tiles in the standard XYZ format for use elsewhere. It can also be scripted, which meant I could use it as part of an automated workflow to generate new tiles as the OSM data changes. This seemed like a good solution, so I set about writing some rulesets that would reasonably show the RAMBA trail routes and some automation around it all.
After a lot of experimenting I settled on having a separate tile set for each of the official loops, an overview of all trails, and a base map. The base map would always be shown, and a user can toggle between layers which highlight all the trails or individual loops.
After a few iterations of custom rules, I settled on a simplified set based on the Default.mrules file which comes with Maperitive for rendering the base map. The only modification was changing the font to Michael Adams’ Roadgeek 2005 Transport Medium font, as it looks nicer than the default, Verdana. For the overview and route layers I created simple rules based on the default rendering of highway=path, using the Heavy version of the font. The rule for each trail route (relation) selects the trails in a given relation then colors them accordingly.
Creating these rules took a bit of fiddling, as Maperitive is a bit of a dead project, not completely documented, and (in the latest beta) sort of buggy, with the map display sometimes ceasing to update. Still, even though I’m not great at making attractive things, I was able to come up with something that worked well enough.
Conveniently, Maperitive also comes with a command line version (Maperitive.Console.exe). After settling on rendering rules and a tile generation script, I used this as part of an automated workflow which downloaded OSM data directly then rendered each of the tile sets.
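The OSM download step itself is nothing special. A minimal sketch of what fetch_osm.bat does — using the Overpass API map endpoint and a made-up bounding box (min lon, min lat, max lon, max lat), so substitute the real Ishpeming/Negaunee coordinates — is a single curl call:

curl -o ramba_area.osm "https://overpass-api.de/api/map?bbox=-87.75,46.45,-87.55,46.55"

The resulting .osm file is what the Maperitive script loads before rendering each tile set.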
After tile generation I used a Windows binary of OptiPNG to losslessly compress the tiles, resulting in a ~62% space savings (original: 746MB, optimized: 286MB) which’ll reduce storage and bandwidth overhead.
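The optimization pass is just running OptiPNG across every generated tile. A rough Unix-style equivalent of what optimize_tiles.bat does (the directory name is only an example) looks like this:

find tile_output/optimized -name '*.png' -exec optipng -o2 -quiet {} +

Higher -o levels squeeze out a bit more, but take considerably longer across a large tile set.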
The Front End
With tiles generated I needed a way to display them. It turns out that OpenLayers was easy to use, and it all runs as a simple client-side application in the browser. By using npm and parcel, with Visual Studio Code for editing, it was quite easy to get the site developed, tested, and bundled up for deployment. The only component I had to add was the ol-layerswitcher control, which provides an easy way to toggle between layers.
Prior to this I had very little experience with modern web development, with my exposure to JavaScript pretty much limited to reading others’ code to figure out what it’s doing. After a bit of confusion (and having to accept the hidden complexity of using an application bundler), I was able to focus solely on writing a single main.js file with a basic index.html that together do what I wanted:
Run full screen by default.
Show all trails by default, with toggles for the defined routes (layers of the map).
Show an attractive background map below the routes to show the rest of the area.
Offer controls to use geolocation to show one’s location on the map and to reset the view to the original map extents.
Look sane on desktop and mobile devices.
This ended up being much easier than I thought, and between the OpenLayers Examples and just some basic programming I was able to get something I’m happy with. Far more time was spent designing the tiles and thinking about what I wanted it to do than writing the code to display it all.
Tile Hosting
The actual map tiles are a number of small PNG files, and a typical session of viewing and panning around the map can result in hundreds of image loads. This seemed a bit slow when served from nuxx.net via HTTP/1.1, so I looked into using HTTP/2 to improve performance.
Unfortunately, it was not simple to turn on HTTP/2 here at nuxx.net as I’m using PHP for WordPress, which in turn requires MPM prefork, which precludes mod_http2. I could have set up another web server and such, but for now I’m hosting the tiles in AWS, with the tiles uploaded to an S3 bucket and served via CloudFront.
This should allow for better tile download performance than what I can do from my server. Despite potentially incurring a bit of a financial cost it is a good experiment in hosting tiles in the cloud. I may change this in the future, particularly if it becomes cost prohibitive, but for now it’s working well.
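Publishing is then just copying the tile tree into the bucket and pointing CloudFront at it; something like the following, where the bucket name and cache lifetime are placeholders rather than my actual settings:

aws s3 sync tile_output/optimized/ s3://example-ramba-tiles/ --cache-control "max-age=86400"

Setting Cache-Control on upload lets CloudFront (and browsers) hold onto tiles instead of re-fetching them on every view.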
Follow Along At Home
If you would like to generate this same map, start by downloading this ZIP file: ramba_trails_map_code_1.0.zip. It contains the scripts and rules needed to generate the map tiles (ramba.mscript and the .mrules files), the index.html, main.js, and package.json for the OpenLayers-based front end, the .osm file used to generate the first release of the map, and a few batch files that tie it all together.
These batch files are included to help you out, but may need some editing to fit your environment:
fetch_osm.bat: Uses curl to download all OSM data within a bounding box that encompasses the Ishpeming/Negaunee area.
generate_tiles.bat: Runs ramba.mscript via Maperitive.Console.exe to generate the tiles.
optimize_tiles.bat: Copies the unoptimized tiles from the .../tile_output/raw output directory to the .../tile_output/optimized directory, then runs OptiPNG against the tiles to optimize them in place.
To build the web app you’ll need to install npm and parcel, then create a new OpenLayers app as per the directions here. Then install ol-layerswitcher (npm install ol-layerswitcher), replace the default index.html, main.js, and package.json with the ones I provided, and you should be ready to go.
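For reference, the command-line side of that is roughly as follows. The start/build script names assume the parcel-based package.json from the OpenLayers quick start of that era, so check package.json if yours differ:

npm install                   # install dependencies listed in package.json
npm install ol-layerswitcher  # add the layer switcher control
npm start                     # local dev server for testing
npm run build                 # bundle everything for deployment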
Updating the Map
As you can see, the map is two major pieces: the front end and the tiles. Whenever the map data changes in OSM the tiles can be regenerated to update those layers. The code for the front end web app only needs to change if the storage location changes, features are going to be added, etc.
Conclusion
This map has worked out rather well and I’m happy calling it v1.0. It’s been a great learning experience, and I’ve even managed to produce something useful that didn’t exist before: an interactive map of some of the most rugged single track trails in Michigan; one of my favorite places to ride mountain bikes.
It’s far from perfect, and there are some things I could do differently, but for now, I’m considering it a success. When in Negaunee for vacation last week I successfully used development versions of this map to find my way around, so I know it’s better than nothing.
If you find any quirks in the map data — such as trails with wrong names or in the wrong location — please take a screenshot and show me what’s wrong and email that to steve@nuxx.net. I’ve done my best to ensure the RAMBA trails are accurately mapped, but I’ve certainly missed some things.
Problems
No key or other ancillary information (such as logos) that is normally found on print maps.
No terrain. While 1m DEM elevation data is available from the USGS, I couldn’t figure out how to use it in Maperitive for generating hillshading.
No easy way to add clickable items to show additional info or link to external map apps (eg: for navigation).
Maperitive’s text rendering isn’t the best, resulting in goofy looking text at some zoom levels.
Long trails only have one label, placed at the middle. Trails with one name broken into multiple ways will be labeled numerous times.
Due to being run in a browser it’s a sufficient, but not great, mobile experience. Specifically, selecting the geolocation, recenter, and layer controls can be fiddly because they are so small.
Does not work offline, but thankfully most of the RAMBA area now has good mobile data coverage.
Things To Investigate
Keep an eye on AWS cost and performance.
Look at Leaflet for the front end, as it seems a bit more modern.
Consider rendering map tiles with TileMill. This will add a lot of complexity both in setup and styling tiles, but once done should allow a lot more flexibility in styling and overcome most of Maperitive’s problems. mapbox/mbutil should work for getting XYZ PNGs out of MBTiles files.
Consider using a tile server if I don’t want to deal with discrete files.
Look more into using vector tiles with client-side styling. (I passed on this for now, as a GeoJSON file showing each of the routes was a large download and had no benefit over raster tiles.)
Maperitive should run under Mono, and OptiPNG is available for many platforms, meaning it should be possible to reproduce this build under macOS or Linux. Note that the GUI for Maperitive will not currently run on macOS due to Windows.Forms currently being based on Carbon, which is not available for 64-bit macOS. So while the CLI should work, the GUI version isn’t currently compatible with macOS 11.5 (Big Sur) and higher.
For years the Pivot Trail 429 series of bikes has been a sort-of Holy Grail bike for me: the ultimate aggressive cross country / trail mountain bike, and something I really wanted to try. In August of 2020 I was able to spend a few hours riding v2 of the bike around some of my favorite Marquette and RAMBA trails and fell in love. Something about the bike and I clicked, and I came away wanting one. After that trip I sold my beloved Specialized Camber and got ready to buy a new bike.
With the COVID-19 related bike industry shortages it took a lot longer than I’d hoped, but almost a year after that demo — in August of 2021 — I made a quick trip up to Bellaire (three hours each way) and picked up my new bike from Patrick at Paddles & Pedals: a Pivot Trail 429, v3, Race XT build, with the crank and wheels upgraded to high end carbon bits.
While I hadn’t ridden this new v3 of the Trail 429, and it’s a much longer reach bike than v2, I’d stared at geometry numbers for hours, comparing it to my current bikes, and figured that a size large in this model would also be right for me. After getting the bike and swapping the usual contact points, fitting the larger rotors that I wanted, and some other little bits, it was all ready to ride.
Using the Low bottom bracket setting (the higher of the two), a 35mm stem, an upper stack height of 30mm (headset upper cover + 15mm spacer), and the 11° sweep Salsa Salt Flat Carbon bar, the RAD ended up just 5mm shorter than that of the Timberjack, and the bike feels pretty good after its first ride.
I may experiment with the Lower setting, which’d slacken the head tube and seat tube angles by 0.5°, bring the reach in, and increase the stack, but this’ll likely require a 50mm stem to get the fit where I want it. At the same time, it’d bring the bottom bracket height closer to that of the Camber, which might be really nice. Between suspension setup and such, I’ve got a lot of experimenting to do.
About a week back I did a round of updates at home, including updating the Pi-hole container (running in Docker on a Synology DS1019+) to the latest version, v4.2.2. Not long after this I noticed that backups to Backblaze, via Arq running on my main Mac, were stuck with a Caching existing backup metadata (this may take a while) message.
Since it said it might take a while I gave it a few days, but after a week it seemed likely that something was wrong. It turns out it wasn’t caused by any of my updates, but instead by two versions of the block list HOSTS (v3.5.3 and v3.6.0) — the default block list in Pi-hole — which in turn were caused by the Polish block list KAD.
How’d I figure it out? Here goes:
First, a wee bit of digging led to this Reddit thread on /r/Arqbackup, and a quick look at Pi-hole showed that yes, f000.backblazeb2.com is being blocked over and over.
Whitelisting this site allowed backups to resume working. But… why?
I then disabled the whitelist entry and updated gravity in Pi-hole (pulling down and compiling a new copy of the blocklists) and everything kept working. So, it seemed like a block list might have been the source of the problem.
I only use two block lists, one the Pi-hole default and the other from the COVID-19 Cyber Threat Coalition. Taking a quick look through the current versions (1 · 2) didn’t show anything blocking this site as of this morning, which seemed rational as the blocklist update fixed things. Local DNS for this client is via Pi-hole, which in turn points to my firewall, which is running Unbound to handle all resolution itself. So, it shouldn’t have been caused by a DNS provider blocking things.
Pi-hole automatically updates gravity early every Sunday morning, which roughly correlates with when the Arq problems started. So maybe this is it? With the last gravity updates happening on 2021-Apr-04 and 2021-Mar-28 we’ve got a window to look for f000.backblazeb2.com in blocklists.
The COVID-19 Cyber Threat Coalition domain blocklist was updated this morning, and doesn’t have any obvious version control, so I skipped over this one for now. The second, the Pi-hole default HOSTS, is hosted in GitHub and has regular releases. So let’s look through there…
Grabbing the last four releases — v3.5.2, v3.5.3, v3.6.0, and v3.6.1 — spanned the last 18 days, which should cover the window during which this broke. A quick unzip and grep showed f000.backblazeb2.com and www.f000.backblazeb2.com in the fakenews, gambling + social, gambling + porn, and social categories in versions 3.5.3 and 3.6.0, but not in anything before or after.
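The check itself is quick. Something along these lines, with hypothetical archive names standing in for the actual release downloads:

unzip -q hosts-3.6.0.zip -d hosts-3.6.0
grep -rl 'f000\.backblazeb2\.com' hosts-3.6.0/

Repeating that across each release shows exactly which versions carried the entry.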
There we go: the reason for the block, and it’s all within the observed timeframe. This isn’t a hostname one would normally want to block, as it’s part of Backblaze’s CDN (PDF). Sounds like an overzealous addition to a blocklist got sucked up into the HOSTS list.
Looking further through the grep output, this was part of the .../KADhosts/hosts file from the KAD list. It turns out that f000.backblazeb2.com was added to the KAD list on 2021-Mar-26 and then removed on 2021-Apr-01. HOSTS pulled from KAD for v3.5.3 on 2021-Mar-28 and for v3.6.0 on 2021-Mar-31, which caused it to inherit the block in those versions.
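Since KAD is maintained in git, the add and remove dates fall out of the repository history. Cloning the KAD repository and using git’s pickaxe search lists the commits that added or removed the string:

git log -S 'f000.backblazeb2.com' --date=short --pretty='%h %ad %s'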
Quite an interesting chain, eh? A Polish ad blocking group makes a change that ends up in the default list for one of the most common DIY adblockers, which in turn breaks access to a fairly common CDN, in turn breaking data backups. It’s dependencies all the way down…
It’s now fixed, and everything would have resolved itself had I waited until Sunday, but at least now I know why.
Both the Electric Queen and Timberjack were fitted with the same Industry Nine Trail S Hydra 28H wheelset; a really nice value wheelset which mates the amazing Hydra hubs with aluminum rims. Despite slightly denting (and fixing) the rear rim, these have held up great and been wonderful to ride, but I still occasionally found myself missing the stiffness (and durability) of carbon rims.
As the bike sat over winter I figured it’d be a good time to upgrade to carbon rims, so just before Thanksgiving when Light Bicycle was offering a bit of a sale I ordered a set of rims and got the process started. Between these value rims, (literally) slow-boat-from-China [1] shipping, eBay-special spokes, and spare nipples from previous builds I was able to put together a nice, solid carbon wheelset for about $550 less than if I’d bought a complete similar set from I9. And I’ll have some rims to sell (or reuse).
The Trail S Hydra wheels come with straightpull hubs that I9 doesn’t sell separately, but they were nice enough to send me the specifications for them. With some forward/backward checking against the original rims and spokes (597mm ERD, 303mm spokes) I found the DT Swiss Spoke Calculator to work great for these hubs as well.
For rims I chose the Light Bicycle Recon Pro AM930 rim, which is their high end 30mm internal 29er rim with a nude unidirectional carbon finish. As options I chose 28h drilling, black logos, and black valve stems to match the hubs and any bike. (Silver logos would also have been fine to match the hub logos, but I really prefer plain looking rims.)
When shopping around for spokes a deal popped up on eBay offering a whole box of 298mm DT Swiss Competition straightpull spokes, which perfectly match the Squorx nipples left over from previous wheel builds. I love working with nipples like these, because they are tightened with a T-handle tool from the back side, which makes building way more comfortable and faster than with a traditional spoke wrench. It also means there’s no chance of marring the anodizing on the nipples.
The wheels were built up using Ultra Tef-Gel as thread prep, to a maximum tension of ~131kgf. Before starting the build I hadn’t realized that the inner and outer spokes on each side of the wheel would end up at different tensions. Because the flange offset is a bit different for each set of spokes on each side — necessary so the straight spokes don’t interfere with each other — the bracing angle is slightly different, resulting in a different tension.
I did have a slight issue where, when bringing the front wheel to tension and trying to hit the Light Bicycle recommended tension of ~145kgf, the inner Squorx heads broke off of three nipples. After this I detensioned the wheel and brought it back up to a lower, but still appropriate, spec. (In the process of figuring this out I ended up cutting two spokes, as the nipples couldn’t easily be turned. After the third broke I detensioned the wheel and decided to build to a lower tension.)
Final tension for the wheels is as follows, with the small number being the reading on a Park Tool TM-1:
Front Wheel (NDS / L / Brake Side is Steeper Bracing Angle):
Per usual with carbon rims, building is a matter of centering the rim, eliminating runout, and tensioning the spokes. There’s really no truing (in the traditional sense) because single-spoke tension doesn’t really affect a stiff carbon rim.
Out-of-pocket cost was $651.13 on top of the original wheelset, for a total of $1519.27 (excluding tires and sealant, and whatever I can sell the old rims for).
A complete Industry Nine Hydra Trail S Carbon would cost about $2015 (with Shipping + Tax), about $500 more than the end cost of building these. While this set doesn’t have the US-made Reynolds Blacklabel rims, I’ve been happy with Light Bicycle rims on previous bikes and anticipate these’ll be just as good.
The final build, without tape/valves/tires/rotors/cassette, comes in at 794g for the front wheel and 917g for the rear wheel (1711g total). This is a 51g savings over the Trail S Hydra build when going to wider and stiffer rims. This isn’t enough weight savings to notice, but at least it didn’t add anything.
When putting the wheels back together I fitted the old tires as they still have a good bit of life left. I also used the original valve stems from Industry Nine as they are a bit shorter and I prefer the brass body versus the aluminum valves that came with the rims. It also turns out that Light Bicycle provided more than 2x as much tape as needed for the rims, which is great for future spare use. (The rims came with two rolls, one roll did both with plenty to spare.)
[1] The shipping notification states: “It is scheduled to board a Matson Liner’ ship for a sea journey of about 3-4 weeks before its arrival at Los Angeles port in the US. Then UPS will pick the package up to manage the local delivery for you. It is only when the pickup is made, the information at UPS website will be updated further as well as you could reach out to UPS by calling 800-742-5877 for quicker help then.”
This week I moved from UniFi to a new setup that includes OPNsense on the edge to handle firewall, NAT, and other such tasks on the home network. Built into OPNsense is a basic NetFlow traffic analyzer called Insight. Looking at this with Reverse lookup turned on, something strange popped out: ~22% of the traffic coming in from the internet over the last two hours was from just two hosts: dynamic-75-76-44-147.knology.net and dynamic-75-76-44-149.knology.net.
While reverse DNS worked to resolve the IPs to hostnames (75.76.44.147 to dynamic-75-76-44-147.knology.net and 75.76.44.149 to dynamic-75-76-44-149.knology.net), forward lookups of those hostnames didn’t work. This didn’t really surprise me as the whole DNS situation on the WOW/Knology network is poor, but it did make me more curious. Particularly strange was that the IPs are so close together.
To be sure this is Knology (ruling out intentionally-misleading reverse DNS) I used whois to confirm the addresses are owned by them.
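There’s nothing fancy here; it’s just a whois lookup on each address, reading the organization fields in the output (the field names shown are ARIN’s; other registries label them differently):

whois 75.76.44.147 | grep -iE 'OrgName|NetName'
whois 75.76.44.149 | grep -iE 'OrgName|NetName'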
My home ISP is Wide Open West (WOW), and Knology is an ISP that they bought in 2012. While I use my ISP directly for internet access (no VPN tunnel to elsewhere), I run my own DNS to avoid their service announcement redirections, so why would I be talking to something else on my ISP’s network?
Could this be someone doing a bunch of scanning of my house? Or just something really misconfigured doing a bunch of broadcasting? Let’s dig in and see…
First I used the Packet capture function in OPNsense to grab a capture on the WAN interface filtered to these two IPs. Looking at it in Wireshark showed it was all HTTPS. Hmm, that’s weird…
A couple coworkers and I have Plex libraries shared with each other, so maybe that’s it? The port isn’t right (Plex usually uses 32400) but maybe one of them is running it on 443 (HTTPS)… But why the two IPs so close to each other? Maybe one of them is getting multiple IPs from their cable modem, has dual WAN links configured on their firewall, and it’s bouncing between them… (This capture only showed the middle of a session, so there was no certificate exchange present to get any service information from.)
Next I did another packet capture on the LAN interface to see whether the local endpoint is a computer on the network or OPNsense itself. This showed it’s coming from my main personal computer, a 27″ iMac at 192.168.0.8 / myopia.--------.nuxx.net, so let’s look there. (Plex doesn’t run on the iMac, so that’s ruled out.)
Conveniently the -k argument to tcpdump on macOS adds packet metadata, such as process name, PID, etc. A basic capture/display on myopia with tcpdump -i en0 -k NP host 75.76.44.149 or 75.76.44.147 to show all traffic going to and from those hosts identified Firefox as the source:
07:39:57.873076 pid firefox.97353 svc BE pktflags 0x2 IP myopia.--------.nuxx.net.53515 > dynamic-75-76-44-147.knology.net.https: Flags [P.], seq 19657:19696, ack 20539524, win 10220, options [nop,nop,TS val 3278271236 ecr 1535621504], length 39
07:39:57.882070 IP dynamic-75-76-44-147.knology.net.https > myopia.--------.nuxx.net.53515: Flags [P.], seq 20539524:20539563, ack 19696, win 123, options [nop,nop,TS val 1535679857 ecr 3278271236], length 39
Well, okay… Odd that my browser would be talking so much HTTPS to my ISP directly. I double-checked that DNS-over-HTTPS was disabled, so it’s not that…
Maybe I can see what these servers are? Pointing curl at one of them to show the headers, the server header indicated proxygen-bolt, which is a Facebook framework.
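The invocation was something like the following (reconstructed, not the exact command I ran): -k tells curl to ignore the certificate mismatch against a bare IP, and -D - dumps the response headers to the terminal:

curl -sk -D - -o /dev/null https://75.76.44.149/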
Finally I pointed openssl at the IP to see what certificate it’s presenting and it’s a wildcard cert for a portion of Facebook’s CDN:
c0nsumer@myopia Desktop % openssl s_client -showcerts -connect 75.76.44.149:443 </dev/null
CONNECTED(00000003)
depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert High Assurance EV Root CA
verify return:1
depth=1 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert SHA2 High Assurance Server CA
verify return:1
depth=0 C = US, ST = California, L = Menlo Park, O = "Facebook, Inc.", CN = *.fdet3-1.fna.fbcdn.net
verify return:1
[SNIP]
As a final test I restarted tcpdump on the iMac then closed the Facebook tab I had open in Firefox and the traffic stopped.
So there’s our answer. All this traffic is to Facebook CDN instances on the Wide Open West / Knology network. It sure seems like a lot for a tab just sitting open in the background, but hey… welcome to the modern internet.
I could have received more information from OPNsense’s Insight by clicking on the pie slice shown above to look at that host in the Details view, but it seems to have an odd quirk. When the Reverse lookup box is checked, clicking the pie slice to jump to the Details view automatically puts the hostname in the (src) Address field, which returns no results (it needs an IP address). I thought this was the tool failing, so I looked to captures for most of the info.
Later on I realized that filtering on the IP showed a bunch more useful information, including two other endpoints within the network talking to these servers (mobile phones), and that HTTPS was also running over UDP, indicating QUIC.
(Bug 4609 was submitted for this issue and AdSchellevis fixed it within a couple hours via commit c797bfd.)
From looking at photos of the fork, test fitting end caps, and confirmation from Salsa directly, the Kingpin Deluxe has 31mm dropouts designed to fit the SRAM-developed Torque Cap end caps. Originally intended to make suspension forks less prone to twisting, the larger 31mm OD end caps (instead of the standard 21mm OD) strengthen the wheel/axle interface. This is well documented elsewhere, and end caps are available for most higher-end wheelsets (I9, DT Swiss, etc), but it had only appeared on non-fatbike RockShox suspension forks… until now.
It’s not clear to me why Salsa chose to put 31mm dropouts on the already-stiff, rigid, carbon Kingpin Deluxe fork, but they did. My guess is they saw potential for dynamo hubs — which generate power via forces between the stationary axle and moving hub shell — to use Torque Caps so they have a larger interface with the hub. After all, one of the new features of the Kingpin Deluxe fork is internal routing for dynamo hubs.
The only downside to 31mm dropouts is that without Torque Cap end caps on the hub the wheel won’t self-center on the axle, making wheel installation a little bit fiddly. In practice this isn’t a problem, and Newmen makes stick-on Torque Cap Fork Reducers to mitigate it, but I wanted to see if I could get some actual Torque Caps for my DT Swiss 350 Big Ride Center Lock hubs (H350DCIXR32SA6259S) to do it right.
After a bit of email with Logan, one of the ever-helpful folks at DT Swiss, I learned that unlike all their other hubs, these have equal-length end caps and until now there weren’t any fat bike forks with 31mm dropouts, so DT doesn’t have a Torque Cap kit for these hubs. Logan suggested that I pick up two of the HWGXXX0009100S kits for regular 350 hubs, then use the longer pieces on each side of the hub, figuring this should fit. While this was a bit pricey (~$65), it felt like the right choice so the wheel would match the fork.
The kits arrived and just as Logan had calculated these caps dropped right in and now the end caps and fork match. Hopefully in the future DT Swiss will offer a kit that has just the necessary parts so others won’t have to buy two as well.
With consolidating and upgrading my home network I’m moving Pi-hole from a stand-alone Raspberry Pi to running under Docker on my Synology DS1019+ running DiskStation Manager (DSM) v6.2.3.
This was a little bit confusing at first as the web management UI would work, but DNS queries weren’t getting answered. This ended up being caused by the bonded network interface, which is ovs_bond0 instead of the normal default of eth0.
Using the official Pi-hole Docker image, set to run with Host networking (Use the same network as Docker host in the Synology UI), setting or changing the following variables will set up Pi-hole to work from first boot, configured to:
Listen on ovs_bond0 (instead of the default eth0).
Answer DNS queries on the same IP as DSM (192.168.0.2).
Run the web-based management interface on port 8081 with the password piholepassword.
Send internal name resolutions to the internal DNS/DHCP server at 192.168.0.1 for clients *.internal.example.com within 192.168.0.0/24.
Set the displayed temperature to Fahrenheit and the time zone to America/Detroit.
Listen for HTTP requests at http://diskstation.internal.example.com:8081 alongside the default pi.hole hostname.
Additionally, setting up volumes for /etc/dnsmasq.d/ and /etc/pihole/ will ensure changes made via the UI persist across restarts and container upgrades.
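For those not using the Synology UI, the same configuration expressed as a docker run command looks roughly like this. The variable names are from the pihole/pihole image documentation of that era and may have been renamed in newer releases (the CONDITIONAL_FORWARDING_* variables in particular were later replaced by REV_SERVER_*), and the host-side volume paths are only examples, so treat this as a sketch rather than a drop-in command:

# host networking so Pi-hole answers DNS on the NAS's own IP
docker run -d --name pihole \
  --network host \
  -e INTERFACE=ovs_bond0 \
  -e ServerIP=192.168.0.2 \
  -e WEB_PORT=8081 \
  -e WEBPASSWORD=piholepassword \
  -e VIRTUAL_HOST=diskstation.internal.example.com \
  -e TZ=America/Detroit \
  -e TEMPERATUREUNIT=f \
  -e CONDITIONAL_FORWARDING=True \
  -e CONDITIONAL_FORWARDING_IP=192.168.0.1 \
  -e CONDITIONAL_FORWARDING_DOMAIN=internal.example.com \
  -e CONDITIONAL_FORWARDING_REVERSE=0.168.192.in-addr.arpa \
  -v /volume1/docker/pihole/etc-pihole:/etc/pihole \
  -v /volume1/docker/pihole/etc-dnsmasq.d:/etc/dnsmasq.d \
  pihole/pihole:latest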
Note: If you stop the Pi-hole container, clear out the contents of these directories, and then restart the container, Pi-hole will set itself up again from the environment variables. This allows tweaking the variables without recreating the container each time.
UPDATE: With the update to Synology DSM 7.0 the interface is now called bond0.
Consider the following: You are attempting to update OpenStreetMap (OSM) trail routes using JOSM and find that the previous way is very detailed, but fairly wrong, meaning that a lot of nodes will need to be moved.
Even with the Improve Way Accuracy tool this’ll be a pain. So what can you do? First decrease the number of nodes using Simplify Way and then move the remaining nodes, adding new ones as needed in the gaps. This will keep the original way intact, keep most of the route present, but allow for easy cleanup. It also reduces the number of nodes, making for simpler routes that take up less space on GPS devices. (I find that a maximum error setting of 0.5m or 1m works well.)
In 2016 I used the official GIS data from the Noquemanon Trails Network (NTN) to add the singletrack trails to OSM. This worked pretty well, but since then it’s become possible to trace the Strava Global Heatmap high-resolution data when mapping. When doing some routine updates and using this layer for assistance I noticed how many trails originally entered using the NTN’s official data aren’t quite correct. So along with adding changes, I’m tweaking the trail routes using the Strava data.
The primary issue is that the official data would often have a large number of points very close together — in some cases just inches apart — particularly around curves. These points were much closer than needed for accurate mapping, and yet these curves would be the main things that needed adjusting. Moving all of these points would be a hassle and the resolution wasn’t necessary, so by simplifying the route, correcting the nodes that remain, and adding in more as needed, cleanup of the route is much faster. It also reduces the number of nodes along each way, saving space.
The following images show a great example of this problem along Mossy (way 40781586), the last piece of single track in Pioneer Loop (relation 6109593) when ridden clockwise from the trailhead: