Press "Enter" to skip to content

nuxx.net Posts

Simple PAC File Pilot Testing (including WPAD)

In a network that’s isolated from the public internet, such as many enterprise networks, proxy servers are typically used to broker internet access for client computers. Configuring the client computers to use these proxies is often done via a Proxy Auto-Config (PAC) file: code that steers requests so traffic for internal sites stays internal and traffic for public sites goes through the proxies.

Commonly these PAC files are made available via the Web Proxy Auto-Discovery Protocol (WPAD) as well, because some systems need to discover them automatically. Specifically, in a Windows 10 environment which uses proxies, WPAD is needed because many components of Windows (including the Microsoft Store and Azure Device Registration) will not use the browser’s PAC file settings; they depend on WPAD to find a path to the internet.

WPAD is typically configured via DNS, with a hostname of wpad.companydomain.com (or anything in the DNS Search Suffix List) resolving to the IP of a webserver [1]. This server must then answer an HTTP request for http://x.x.x.x/wpad.dat (where x.x.x.x is the server’s IP) or http://wpad.company.com/wpad.dat with a PAC file, served with a Content-Type of application/x-ns-proxy-autoconfig [2].

Because WPAD requires DNS, something which can’t easily be changed for a subset of users, putting together a mechanism to perform a pilot deployment of a new PAC file can be a bit complicated. When attempting to perform a pilot deployment engineers will often send out a test PAC file URL to be manually configured, but this misses WPAD and does not result in a complete system test.

In order to satisfy WPAD, one can set up a simple webserver to host the new PAC file and a DNS server to answer the WPAD queries. This DNS server forwards all requests except for those for the PAC file to the enterprise DNS, so everything else works as normal. Testing users then only need to change their DNS to receive the pilot PAC file and everything else will work the same; a true pilot deployment.

Below I’ll detail how I use simplified configurations of Unbound and nginx to pilot a PAC file deployment. This can be done from any Windows machine, or with very minor config changes from something as simple as a Raspberry Pi running Linux.

[1] WPAD can be configured via DHCP, but this is only supported by a handful of Microsoft applications. DNS-based WPAD works across all modern OSes.

[2] Some WPAD clients put the server’s IP in the Host: field of the HTTP request.

DNS via Unbound

Unbound is a DNS server that’s straightforward to run and is available on all modern platforms. It’s perfect for our situation where we need to forward all DNS queries to the production infrastructure, modifying only the WPAD/PAC related queries to point to our web server. While it’s quite robust and has a lot of DNSSEC validation options, we don’t need any of that.

This simple configuration forwards all requests to the corporate Active Directory-based DNS servers (10.0.1.2 and 10.0.2.2), except those for the PAC file servers. For these, pacserver.example.com and wpad.example.com, it’ll intercept the request and return our webserver’s address of 10.0.3.25.

server:
    interface: 0.0.0.0
    access-control: 0.0.0.0/0 allow
    module-config: "iterator"

    local-zone: "wpad.example.com." static
    local-data: "wpad.example.com. IN A 10.0.3.25"

    local-zone: "pacserver.example.com." static
    local-data: "pacserver.example.com. IN A 10.0.3.25"

stub-zone:
    name: "."
    stub-addr: 10.0.1.2
    stub-addr: 10.0.2.2

This configuration allows recursive queries from any host, but by specifying one or more subnets in access-control clauses you can restrict where it can be used from. The stub-zone clause sends all requests for the root zone, everything not matched locally, up to the two corporate DNS servers. If these upstream servers handle recursion for the client, the forward-zone clause can be used instead.
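For example, a minimal sketch of those two changes together (the 10.0.0.0/8 range is an assumption; substitute your pilot users’ subnets):

server:
    interface: 0.0.0.0
    access-control: 10.0.0.0/8 allow

forward-zone:
    name: "."
    forward-addr: 10.0.1.2
    forward-addr: 10.0.2.2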

PAC File via nginx

For serving up the PAC file, both for direct queries and those from WPAD, we’ll use nginx, a powerful but easy to use web server to which we can give a minimal config.

Put a copy of your PAC file at …/html/wpad.dat under nginx’s install directory so the server can find it. (There is great information on writing PAC files at FindProxyForUrl.com.)
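If you’re starting from scratch, a minimal sketch of what wpad.dat might contain is below; the proxy hostnames, port, and internal ranges are all assumptions for illustration:

function FindProxyForURL(url, host) {
    // Internal names and RFC 1918 space stay direct (example values)
    if (dnsDomainIs(host, ".example.com") ||
        isInNet(dnsResolve(host), "10.0.0.0", "255.0.0.0")) {
        return "DIRECT";
    }
    // Everything else goes through the (hypothetical) proxy pair
    return "PROXY proxy1.example.com:8080; PROXY proxy2.example.com:8080";
}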

This simple configuration will set up a web server which serves all files as MIME type application/x-ns-proxy-autoconfig, offering up the wpad.dat file by default (eg: http://pacserver.example.com) or when directly referenced (eg: http://10.0.3.25/wpad.dat or http://wpad.example.com/wpad.dat), satisfying both standard PAC file and WPAD requests.

events {
    worker_connections 1024;
}

http {
    default_type application/x-ns-proxy-autoconfig;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;

        location / {
            root html;
            index wpad.dat;
        }
    }
}

Putting It All Together

With all the files in place and unbound and nginx running, you’re ready to go. Instruct pilot users to manually configure the new DNS, or push this setting out via Group Policy, VPN settings, or some other means. These users will then get the special DNS response for your PAC and WPAD servers, get the pilot PAC file from your web server, and be able to test.
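To verify a pilot machine is getting the pilot configuration, a quick check might look like the following (10.0.3.24 as the pilot DNS server’s IP is an assumption; use your Unbound host’s address):

nslookup wpad.example.com 10.0.3.24
curl -v http://wpad.example.com/wpad.dat

The first should return 10.0.3.25, and the second should return the pilot PAC file with a Content-Type of application/x-ns-proxy-autoconfig.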


Salsa Timberjack: Fixing a Mistake

After building up the All-City Electric Queen and riding it a handful of times, it just didn’t feel… right. Standing and handling the bike was fine, but seated pedaling — especially when climbing or just after sitting back down — felt quite off. Turns out the problem was the 71° seat tube angle on the frame coupled with my shorter femurs; I simply can’t get the saddle far enough forward on one of these frames.

Unfortunately, the only good solution was to get a different frame. The Salsa Timberjack was my original choice for this hard tail trail bike build, but I got excited by the idea of a steel bike, loved the paint on the Electric Queen, and glossed over the seat tube angle. By the time I realized I needed a new frame the stand-alone black frames were no longer available. Fortunately, over at the excellent Sports Rack Marquette, Evan had some new frames from complete bikes available, and I was able to get a beautiful gloss teal frame from a 2019 Timberjack Deore 27.5+ from them.

Besides matching my wants geometry-wise, the frame is a great choice because all parts except for the headset swapped over, and the frame came with a headset. While the stock Cane Creek 10 is a lower end part, which lacks sealing on the top bearing cover and has a plastic compression ring, plastic crown race, and black oxide bearings, it works and will be fine for a while. The fork was already fitted with a higher end matching crown race, and I have a Cane Creek Hellbender 70 headset ready to swap in once the bearings and compression ring start to go.

One downside to the Timberjack vs. Electric Queen is that I’ll no longer have a rigid fork for the bike, but if I really want one the Firestarter 110 Deluxe is a perfect match. The top tube on this bike is also a little bit tall, as it’s also designed for bikepacking and fitting a top tube bag, but it’s plenty comfortable to ride and I love all the bottle cage options.

To round out the build and get the colors nice I ordered some new fork decals from Slik Graphics. Unfortunately, I screwed up and ordered the decals for the Factory-series forks, so while it looks good, I technically have the wrong upper logo on the fork lowers. I’ve since ordered another set with the proper Performance decals for the upper, and am waiting for them to arrive. Since this order was placed Slik became involved in a dispute with Fox, so I’m hoping to receive the updated decals. Even if they don’t arrive, at least the colors are right on the fork. I could even remove the upper decals and have it still look good.

When finishing up the build I ran into a significant problem with the brakes: squealing and vibration. Due to part availability I’d purchased the calipers and levers as a non-retail / meant-for-complete-bikes / likely grey market set from a well-known eBay seller, ronde-cycling. I was never able to get them bedded in properly, and after a few rides they began squealing horridly and shuddering under hard braking. This seller offers different pad options with the brakes, and I began to suspect they handle the pads when assembling each set for sale, did so poorly, and contaminated the pads before they got to me.

I tried the normal recommendations of cleaning everything, sanding the pads and rotors, and even baking the pads in the oven, but after each bed-in procedure they’d begin squealing again. The resolution was a set of new J04C pads and another bed-in, and now the brakes are working great. At ~$50 for a new set of pads this really added to the cost of the brakes, but at least they are now working.

The final build, with water bottle cages, pedals, and computer, came out to right around 27 pounds. And, it fits! Since building it I’ve put over 180 miles and nearly 16 hours on the bike, haven’t touched the geometry, and I’m really happy with the result. It’s exactly what I wanted: a high quality hard tail trail bike.

Full details below:

Frame: Salsa Timberjack (Large, Teal, 2019)
Fork: Fox 34 Step-Cast (Performance, FIT4 damper, Black Upper Tube Finish, 120mm, 51mm offset, 15QR)
Fork Decals: Slik Graphics Fox 34 Step-Cast Factory Style Decal Kit / Fox 34 Step Cast Performance Elite Decal Kit (Color 1: Medium Grey, Color 2: Dark Grey, Finish: Matte)
Headset: Cane Creek 10 (Black, ZS44/ZS56)
Crankset: SRAM X1 1400 GXP
Bottom Bracket: SRAM GXP (Black)
Chainring: SRAM X-SYNC 2 (32t, steel, Direct Mount, 3mm / Boost)
Derailleur: SRAM GX Eagle
Shifter: SRAM GX Eagle
Shift Cables / Housing: Shimano Bulk
Cassette: SRAM XG-1275
Brakes: Shimano SLX M7100 (Levers: BL-M7100, Calipers: BR-7100)
Brake Pads: Shimano J04C (Finned, Metal)
Front Rotor: SM-RT86-L (203mm)
Rear Rotor: SM-RT86-M (180mm)
Front Brake Adapter: SM-MA-F203P/P (160mm Post to 203mm Post)
Rear Brake Adapter: Shimano SM-MA-R180P/S (IS to 180mm Post)
Stem: Salsa Guide (+6°, 60mm)
Bar: Salsa Salt Flat (750mm)
Wheels: Industry Nine Trail S Hydra 28H (29″)†
Tires: Maxxis Rekon (29 x 2.4″, 3C/EXO/TR)
Seatpost: Fox Transfer Performance Elite (2020, Black, 125mm, 30.9mm, Internal)
Dropper Lever: Wolf Tooth ReMote Light Action (Black, 22.2mm Clamp)
Seatpost Collar: Salsa Lip-Lock (Black, 35.0 mm)
Saddle: Specialized Power Expert (143mm)
Pedals: Crank Brothers Eggbeater 3 (Green, from Blackborow)
Grips: ESI Extra Chunky (Black)
Bottle Cages: Specialized Zee Cage II (Black Gloss, 1x Left, 2x Right)
Computer: Garmin Edge 530, Garmin Speed and Cadence Sensors (v1), Best Tek Garmin Stem Mount
Bell: Mirracycle Original Incredibell (Black)
Derailleur Hanger: QBP FS1373
Frame Protection Tape: 3M 2228, McMaster-Carr UHMW PE
Cable Rattle Prevention: Frost King EPDM Weatherseal (V25A, slipped over dropper housing)


CRAMBA Trails Outline Poster from OSM Data

Finding myself a little bored, I put together a poster (11″ x 17″) showing outlines of the CRAMBA-supported trails on one overview. (Link)

This ended up being more popular than I expected, with a handful of people wanting to know how I did it, so I’ll detail the steps here:

  1. Ensure that all the trail routes are in OpenStreetMap.
  2. Using JOSM load each trail area one at a time and make an OSM XML file with just the data you want outlined:
    1. Select the ways which comprise the trail you want shown.
    2. Create a new data layer (Command-N).
    3. Make the original data layer active.
    4. Copy the selected data from the first layer to your new layer with Edit → Merge Selection (Shift-Command-M).
    5. Hide the original data layer.
    6. Review the new layer to be sure it has everything you want.
    7. Select all nodes and ways (Command-A) and remove all tags to make later processing easier.
    8. Look good? Is everything you want in the new layer? Save it to a .osm file and do the next trail.
  3. Once you have an OSM file for each trail, convert them to Adobe Illustrator format using this version of osm2ai.pl.
    1. Get osm2ai.pl working on your computer. I run this on macOS, and it works fine on Linux as well. Since it’s a Perl script there are probably some dependencies; likely resolved by installing a few modules.
    2. Process each OSM file with: osm2ai.pl --input infilename.osm --projection mercator --output outfilename.ai (a batch loop sketch follows this list)
  4. Open each file in Illustrator, combine them into a larger document, make it look the way you want, etc.
  5. Done!
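If you have many OSM files, a small shell loop can handle step 3.2 in one pass (assuming osm2ai.pl is executable and on your PATH, and the .osm files are in the current directory):

for f in *.osm; do
    osm2ai.pl --input "$f" --projection mercator --output "${f%.osm}.ai"
done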


Archiving Gallery 2 with HTTrack

Along with the static copy of the MediaWiki, I’ve been wanting to make a static, archival copy of the Gallery 2 install that I’ve used to share photos at nuxx.net/gallery for 15+ years. Using HTTrack I was able to do so, after a bit of work, resulting in a copy at the same URL, with images accessed using the same paths, served from static files.

The result is that I no longer need to run the aging Gallery 2 software, yet links and embedded images that point to my photo gallery did not break.

In the last few years traffic has dropped off, I haven’t posted many new things there, and it seems like the old internet practice of pointing people to a personal photo gallery is nearly dead. I believe that blog posts, such as this, with links to specific photos, are where effort should be put. While there is 18+ years of personal history in digital images in my gallery, it doesn’t get used the way it was 10 years ago.

On the technical side, the relatively-ancient (circa 2008) Gallery 2 install and the ~90GB of data in it have occasionally been a burden. I had to maintain an old copy of PHP just for this app, and this made updating things a pain. While there is a recent project, Gallery the Revival, which aims to update Gallery to newer versions of PHP, it is based around Gallery 3 and a migration to that brings about its own problems, including breaking static links.

I’m still not sure if I want to keep the gallery online but static as it is now, put the web app back up, completely take it off the internet and host it privately at home, or what… but figuring out how to create an archive has given me options.

What follows are my notes on how I used HTTrack, a package specifically designed to mirror websites, to archive nuxx.net’s Photo Gallery. I encountered a few bumps along the way, so this details each and how it was overcome, resulting in the current static copy. To find each of these I’d start HTTrack, let it run for a while, see if it got any errors, fix them, then try again. Eventually I got it to archive cleanly with zero errors:

Gallery Bug 83873

During initial runs, HTTrack finished after saving only ~96MB (out of ~90GB of images), reporting that it was complete. The main portions of the site looked good, but many sub-albums and original-resolution images were zero-byte HTML files on disk and displayed blank in the browser. This was caused by Gallery bug 83873, triggered by using HTTPS on the site. It seems to be fixed by adding the following line just before line 780 in .../modules/core/classes/GallerySession.class:

GalleryCoreApi::requireOnce('modules/core/classes/GalleryTranslator.class');

This error was found via the following in Apache’s error log:

AH01071: Got error 'PHP message: PHP Fatal error: Class 'GalleryTranslator' not found in /var/www/vhosts/nuxx.net/gallery/modules/core/classes/GallerySession.class on line 780\n', referer: http://nuxx.net/gallery/

Minimize External Links / Footers

To clean things up further, minimize external links, and make the static copy of the site as simple as possible, I removed the external Gallery links and version info from the footer by commenting them out in .../themes/themename/templates/local/theme.tpl and .../themes/themename/templates/local/error.tpl:

<div id="gsFooter">
{*
{g->logoButton type="validation"}
*{g->logoButton type="gallery2"}
*{g->logoButton type="gallery2-version"}
*{g->logoButton type="donate"}
*}
</div>

Remove Details from EXIF/IPTC Plugin

The EXIF/IPTC Plugin for Gallery is excellent because it shows embedded metadata from the original photo, including things like date/time, camera model, and location. This presents as a simple Summary view and a lengthier Details view. Unfortunately, when the site is being indexed by HTTrack, selecting the Details view — done via JavaScript — returns a server error. This shows up in the HTTrack UI as an increasing error count, and as server errors when some pages are queried.

To not have a broken link on every page I modified the plugin to remove the Summary and Details view selector so it’d only display Summary, and used the plugin configuration to ensure that every field I wanted was shown in the summary.

To make this change copy .../modules/exif/templates/blocks/ExifInfo.tpl to .../modules/exif/templates/blocks/local/ExifInfo.tpl (to create a local copy, per the Editing Templates doc). Then edit the local copy and comment out lines 43 through 60 so that only the Summary view is displayed:

{* {if ($exif.mode == 'summary')}
* {g->text text="summary"}
* {else}
* <a href="{g->url arg1="controller=exif.SwitchDetailMode"
* arg2="mode=summary" arg3="return=true"}" onclick="return exifSwitchDetailMode({$exif.blockNum},{$item.id},'summary')">
* {g->text text="summary"}
* </a>
* {/if}
* &nbsp;&nbsp;
* {if ($exif.mode == 'detailed')}
* {g->text text="details"}
* {else}
* <a href="{g->url arg1="controller=exif.SwitchDetailMode"
* arg2="mode=detailed" arg3="return=true"}" onclick="return exifSwitchDetailMode({$exif.blockNum},{$item.id},'detailed')">
* {g->text text="details"}
* </a>
* {/if}
*}

Disable Extra Plugins

Finally, I disabled a bunch of plugins which both wouldn’t be useful in a static copy of the site, and cause a number of interconnected links which would make a mirror of the site overly complicated:

  • Search: Can’t search a static site.
  • Google Map Module: Requires a maps API key, which I don’t want to mess with.
  • New Items: There’s nothing new getting posted to a static site.
  • Slideshow: Not needed.

Fix Missing Files

My custom theme, which was based on matrix, linked to some images in the matrix directory which were no longer present in newer versions of the themes, so HTTrack would get 404 errors on these. I copied these files from my custom theme to the .../themes/matrix/images directory to fix this.

Clear Template / Page Cache

After making changes to templates it’s a good idea to clear all the template caches so all pages are rendering with the above changes. While all these steps may be overkill, I do this by going into Site Admin → Performance and setting Guest Users and Registered Users to No acceleration. I then uncheck Enable template caching and click Save. I then click Clear Saved Pages to clear any cached pages, then re-enable template caching and Full acceleration for Guest Users (which HTTrack will be working as).

PANIC! : Too many URLs : >99999

If your Gallery has a lot of images, HTTrack could quit with the error PANIC! : Too many URLs : >99999. Mine did, so I had to run it with the -#L1000000 argument, raising the limit to 1,000,000 URLs from the default of 99,999.

Run HTTrack

After all of this, I ran the httrack binary with the security (bandwidth, etc) limits disabled (--disable-security-limits) and used its wizard mode to set up the mirror. The URL to be archived was https://nuxx.net/gallery/, stored in an appropriately named project directory, with no other settings.

CAUTION: Do not disable security limits if you don’t have good controls around the site you are mirroring and the bandwidth between the two. HTTrack has very sane rate-limiting defaults that keep its behavior polite; it’s not wise to override these defaults unless you have good control of both the source and destination sites.

When httrack begins it shows no progress on screen, so I quit with Ctrl-C, switched to the project directory, and ran httrack --continue to allow the mirror to continue and show status info on the screen (the screenshot above). The argument --continue can be used to restart an interrupted mirror, and --update can be used to freshen up a complete mirror.

Alternately, the following command puts this all together, without the wizard:

httrack https://nuxx.net/gallery/ -W -O "/home/username/websites/nuxx.net Photo Gallery" -%v --disable-security-limits -#L1000000

As HTTrack spiders the site it comes across external links and needs to know what to do with them. Because I didn’t specify an action for external links on the command line, it prompts with the question “A link, [linkurl], is located beyond this mirror scope.”. Since I’m not interested in mirroring any external sites (mostly links to recipes or company websites) I answer * which is “Ignore all further links and do not ask any more questions” (text in httrack.c). (I was unable to figure out how to suppress this via a command line option before getting a complete mirror, although it’s likely possible.)

Running from a Dedicated VM

I ran this mirror task from a Linode VM located in the same region as the VM hosting nuxx.net. This results in all traffic flowing over the Private network, avoiding bandwidth charges.

Because of the ~90GB of images, I set up a Linode 8GB, which has 160GB of disk, 8GB of RAM, and 4 CPUs. This should provide plenty of space for the mirror, with enough resources to allow the tool to work. This VM costs $40/mo (or $0.06/hr), which I find plenty affordable for getting this project done. The mirror took N days to complete, after which I tar’d it up and copied it a few places before deleting the VM.

By having a separate VM I didn’t have to worry about any dependencies or package problems, and could simply delete it after the work was done. All I needed to do on this VM was create a user, put it in the sudoers file, install screen (sudo apt-get install screen) and httrack (sudo apt-get install httrack), and get things running.

Wrapping It All Up

After the mirror was complete I replaced my .../gallery directory with the .../gallery directory from the HTTrack output directory and all was good.


Archiving MediaWiki with mwoffliner and zimdump

For a number of years on nuxx.net I used MediaWiki to host technical content. The markup language is nearly perfect for this sort of content, but in recent years I haven’t been doing as much of this and maintaining the software became a bit of a hassle. In order to still make the content available but get rid of the actual software, I moved all the content to static HTML files.

These files were created by generating a ZIM file — commonly used for offline copies of a website — and then extracting it. The extracted files, a static copy of the MediaWiki-based site, were then made available using Apache.

You can get the ZIM file here, or browse the new static pages here.

Here are the general steps I used to make it happen.

Create ZIM file: mwoffliner --mwUrl="https://nuxx.net/" --adminEmail=steve@nuxx.net --redis="redis://localhost:6379" --mwWikiPath="/w/" --customZimFavicon=favicon-32x32.png

Create HTML Directory from ZIM File: zimdump -D mw_archive outfile.zim

Note: There are currently issues with zimdump leaving literal %2f escape sequences in filenames instead of creating directory paths. This is openzim/zim-tools issue #68, and will need to be fixed by hand.

Consider using find . -name "*%2f*" to find the affected files, then use rename 's/.{4}(.*)/$1/' * (or so) to fix the filenames after moving them into appropriate subdirectories.
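As a rough sketch of that cleanup (assuming names with a single %2f, like A%2fMain_Page, that should become A/Main_Page; files with multiple %2f sequences need further passes):

for f in *%2f*; do
    dir="${f%%"%2f"*}"     # text before the first %2f becomes the directory
    base="${f#*"%2f"}"     # the remainder becomes the filename
    mkdir -p "$dir"
    mv "$f" "$dir/$base"
done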

If using Apache (as I am), create an .htaccess to set MIME types appropriately, turning off the rewrite engine so higher-level redirects don’t affect things:

<FilesMatch "^[^.]+$">
ForceType text/html
</FilesMatch>

RewriteEngine Off

Link to http://sitename.com/outdir/A/Main_Page to get to the original main wiki page. In my case, http://nuxx.net/wiki_archive/A/Main_Page.

 


Seat Tube Angle Makes a Big Difference

When I started planning the All-City Electric Queen my main goal was to have a hard tail version of my 2016 Specialized Camber. That is, nice suspension, 120mm of travel in the fork, dropper, and a very similar RAD / RAAD. The only dimension I wanted to change was the seat tube angle: with the 75° tube on the Camber I need to position the saddle pretty far back on the rails, even with the 10mm setback of the Command Post IRCC.

Unfortunately, I think I went too far.

When comparing geometries I saw the Electric Queen has a 71° seat tube and figured that’d be perfect for moving the saddle back, better centering it in the zero-offset Fox Transfer post. What I failed to do was the sort of calculation shown above to illustrate just how far back the saddle would be moved by every degree of seat tube angle change.

It turns out that, at 725mm of saddle height, every degree of seat tube angle change moves the saddle about 12mm. Thus, on my Camber, which has a 75° seat tube, a 10mm offset post, and the saddle about 15mm behind centered, I should have been looking for a ~73° seat tube angle in order to roughly center the saddle on a straight (0mm offset) seatpost. Going to a 71° seat tube moved the center point a full 48mm — nearly two inches — rearward.
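For reference, the trig behind those numbers: the saddle’s horizontal position relative to the bottom bracket is roughly (saddle height measured along the seat tube) × cos(seat tube angle). At my 725mm saddle height:

725mm × (cos 71° − cos 75°) = 725 × (0.326 − 0.259) ≈ 48mm rearward

and the sensitivity per degree of angle change near these angles is 725mm × sin 73° × (π/180) ≈ 12mm.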

During single track rides on the Electric Queen I’ve ended up with a sore lower back when pushing, I’m not comfortable at higher cadences, and my stomach brushes my thighs when leaned over and seated. All of this is a sign of the saddle being too far back. With the 71° seat tube angle I simply can’t get the saddle far enough forward: its center position is 38mm behind where it is on the Camber (with its 10mm offset post). There’s just not enough fore/aft adjustment on a saddle’s rails to make up for this. When just 5mm or 10mm of saddle position, especially rearward, makes a difference, this isn’t good.

This all then brings about the question: did I buy too large of a frame? The answer is: no, I bought a frame with the wrong seat tube angle for my body.

Forward of the bottom bracket — based on stack and reach — the frame fits properly. Where it doesn’t work is from the bottom bracket to the saddle, all due to the 71° seat tube angle. All of the Electric Queen frames have the same seat tube angle, with the only real variances between sizes being stack, reach, and seat tube length.

The problem is that my body shape (shorter legs, likely shorter femurs specifically, longer torso) is not compatible with this frame.

This consistent 71° seat tube angle, from X-Large all the way down to the X-Small frame, makes me wonder how this frame can fit smaller size riders, unless maybe they have exceptionally long femurs. I wear pants with a 30″ inseam — fairly short but not tiny by any stretch — and my legs seem normally proportioned, so if this setback doesn’t work for me there’s likely a whole bunch of other folks it won’t work for either. Or maybe I just have short femurs?

So, where do I go from here?

I’m going to experiment with some other saddles around home (eg: Specialized Phenom) and see if I can make it fit, but it looks like the only choice for getting a comfortably ridable bike will be another frame with a steeper seat tube angle. On first look it seems like the Ritchey Ultra (Large) or Kona Honzo ST (Medium) would work, with the Ultra having a slight edge with its 73.5° seat tube and taking the same size seatpost as the Electric Queen.


All-City Electric Queen: Rugged XC / Trail Hard Tail Bike

Nearly four years after getting a full suspension XC / trail bike (2016 Specialized Camber) and beginning to really enjoy riding with a bit more travel and a dropper post I started getting the itch to build a similarly spec’d hard tail. Picking up a few particularly good deals (frame/fork), shopping smartly, using some parts I’d collected over the years, and getting some parts from friends, I ended up putting together an All-City Electric Queen.

This steel frame from a QBP brand best known for urban bikes, fitted with a smart-choice drivetrain (SRAM GX), higher end wheels (I9 Hydra hubs) and suspension (Fox 34), and reliable brakes (Shimano SLX), is the solid-spec build that I wanted. The frame came complete with a matching rigid fork, and I anticipate switching over to the rigid fork in springtime or when the Fox 34 is out for service.

After the first shakedown ride at Stony Creek I’m pretty happy with the bike. The geometry of the frame itself is a little curious, but it seems to fit me well. Specifically, it has a fairly slack seat tube, tall head tube, tall seat tube, and rather high top tube. This means I have minimal stand-over (akin to the Titus Racer X 29er) and the saddle has to sit quite-forward on the rails, but it seems to work and feels good when pedaling. It also results in the controls (dropper lever and shifter) hitting the top tube when turning the bars, so I’ve had to pad the top tube with some 3M mastic tape.

I suspect this frame would work well for the sort of long-leg/short-torso rider who often finds themselves on a custom frame, but it also works well for me. It also means that a slim top tube bag like the Revelate Designs Tangle would fit along with bottles. Or, the top tube could have been placed almost 1.5″ lower (with a shorter head tube and seat tube) and still been good… But I’m not a frame designer, so I suspect there’s some good but unknown-to-me reason why this was done.

Coming in at 28 pounds it’s not a light bike, but with a cheaper steel frame, a 120mm fork with 34mm stanchions, 2.4″ wide quite-knobby tires, a dropper post, and aluminum rims, I’m okay with it. Going to some carbon bits (eg: bar) I could drop some weight, but I don’t think it’ll matter much, as this is roughly the same weight as the Camber, which I love riding.

Part details are below, and photos of the build showing everything from parts to checking clearances, can be found here: All-City Electric Queen.

Frame: All-City Electric Queen (Large, Lavender/Lime w/ Splatter)
Fork: Fox 34 Step-Cast (Factory, FIT4 damper, Black Upper Tube Finish, 120mm, 51mm offset, 15QR)
Headset: Cane Creek SlamSet (Black, ZS44/EC44/40)
Crankset: SRAM X1 1400 GXP
Bottom Bracket: SRAM GXP (Black)
Chainring: SRAM X-SYNC 2 (32t, steel, Direct Mount, 3mm / Boost)
Derailleur: SRAM GX Eagle
Shifter: SRAM GX Eagle
Shift Cables / Housing: Shimano Bulk
Cassette: SRAM XG-1275
Brakes: Shimano SLX M7100 (Levers: BL-M7100, Calipers: BR-7100)
Front Rotor: SM-RT86-L (203mm)
Rear Rotor: SM-RT86-M (180mm)
Front Brake Adapter: SM-MA-F203P/P (160mm Post to 203mm Post)
Rear Brake Adapter: Shimano SM-MA-R180P/S (IS to 180mm Post)
Stem: Specialized Stout XC, (+6°, 75mm)
Bar: Salsa Salt Flat (750mm)
Wheels: Industry Nine Trail S Hydra 28H (29″)†
Tires: Maxxis Rekon (29 x 2.4″, 3C/EXO/TR)
Seatpost: Fox Transfer Performance Elite (2020, Black, 125mm, 30.9mm, Internal)
Dropper Lever: Wolf Tooth ReMote Light Action (Black, 22.2mm Clamp)
Seatpost Collar: Salsa Lip-Lock (Black, 33.3mm)
Saddle: Specialized Power Expert (143mm)
Pedals: Crank Brothers Eggbeater 3 (Blue, from El Mariachi Ti)
Grips: ESI Extra Chunky (Black)
Bottle Cages: Specialized Zee Cage II (Black Gloss, 1x Left, 1x Right)
Computer: Garmin Edge 530, Garmin Speed and Cadence Sensors (v1)
Bell: Mirracycle Original Incredibell (Black)
Derailleur Hanger: QBP 687
Frame Protection Tape: 3M 2228, McMaster-Carr UHMW PE

Rigid Fork: All-City Electric Queen (Large, Lavender/Lime w/ Splatter)
Crown Race: Cane Creek BAA0009S (40-Series 52/30mm Steel)
Front Brake Adapter: Shimano SM-MA-F203P/S (IS to 203mm Post)

† Per I9, these are built with Sapim Race Straight-Pull spokes, 303mm all around. (For 29er. If these were 27.5 wheels they’d be 285mm.)


High Resolution Strava Global Heatmap in JOSM

Mapping trails in OpenStreetMap (OSM), using JOSM, is done by overlaying one or more data sources, then hand-drawing the trail locations using this background data as a guide. While not a default in JOSM, with a little know-how and a paid Strava Subscription the Strava Global Heatmap data for this can be used as well.

While there’s a fair bit of info about doing this scattered across some JOSM tickets (eg: #16100), this post is to document how I make this work for me by creating a custom Imagery provider. Because there isn’t an official (and robust) plugin to authenticate JOSM against Strava it’s a little tricky, but overall isn’t bad.

First, you need to sign into Strava, then go to the Global Heatmap and get the values of the three cookies CloudFront-Key-Pair-ID, CloudFront-Policy, and CloudFront-Signature. These values are essentially saved credentials, and by using them in a custom Imagery provider URL, JOSM will use these to access the Global Heatmap data via your account and make it available as an Imagery layer.

The easiest way to get these cookie values is to use the Developer Tools built into your web browser:

  1. Launch your web browser, either Firefox, Chrome, or Edge.
  2. Go to the Strava Global Heatmap, sign in if needed (check the Remember me checkbox), and be sure it’s properly displaying high resolution data.
  3. Load the developer tools for your browser to show cookie details:
    • Firefox: Click the Hamburger Menu → Web Developer → Storage Inspector (or press Shift-F9), expand Cookies and click https://www.strava.com.
    • Chrome: Click the Three Dots Menu → More tools → Developer tools (or press Ctrl-Shift-I), select the Application tab, expand Cookies, then click https://www.strava.com.
    • Edge: Click the Three Dots Menu → More tools → Developer Tools (or press F12), select the Storage tab, expand Cookies, then click heatmap.
  4. Get the value of the three cookies, CloudFront-Key-Pair-ID, CloudFront-Policy, and CloudFront-Signature, and make note of them (in Notepad or such). You’ll need them to build the custom URL. Here are examples of what the three cookies will look like:
    • CloudFront-Key-Pair-ID: VB4DYFIQ7PHQZMAGVMOI
    • CloudFront-Policy: FTUGYikErADOqGmqEnPTaaZ3LCE2QbOJSSsmPD9vdY0hfjSWWHZBgnRai1aQcvUe
      2YtgkG3KdP8IencRaGrS3I5bHAeabQ4I776T7UipDPFzNVh9PBIRvb3DjDqVfssp
      mBA5IrxacQGfB2YmKwegp9OTBfUFkvtEQOlzeXfqeuc34cp408GYXSvp2DsnxKFh
      ofbwygraae3VZqnZxp3vBpz3lBoyPnBEN1djJkxpU1fF_
    • CloudFront-Signature: lrMJG0L6zJilpSdGEnWug0zskcMDD5VVGdSRu~P4OOVp5z8iCMBRBO040wxunEQQ
      p0wVHdaCaTFAVqiM0jC0AApW7HigXjx57nsoEVslOzRuoX9S3g-
      FCePWYNPyZcJ95~6SoGdNRHz-
      b-ZZxk4ERhjzXCNIEE8Gt5uImitcPN3HRlhXexS39g30~5OEv~4FHzXfOOIhQ5X5
      cGUD0SpdYviBE~vkAJCk3TbQMXxlJbyX4OOL5xvVPv5WjzMKTuvEk6Z9m6iFBbZc
      8ov4qS60th0tZ1RM4pNMoIWKoqPS3~UPGISe6Xa8Kq-
      tQPxHlfGciI5uhoTp4b7lPussh52QaN__

(Note: These are not real values, you’ll need to look up your own.)

Next, add a custom imagery provider for the Strava Global Heatmap to JOSM using a URL that you build with these cookie values:

  1. Edit → Preferences… (F9) → Imagery preferences
  2. Create a new TMS string by replacing the [] sections in the following string with the cookie values from above, without the []s:
    • tms[15]:https://heatmap-external-{switch:a,b,c}.strava.com/tiles-auth/all/hot/{zoom}/{x}/{y}.png?Key-Pair-Id=[CloudFront-Key-Pair-ID]&Policy=[CloudFront-Policy]&Signature=[CloudFront-Signature]
  3. Add a new TMS entry by clicking the +TMS button.
  4. Paste the string from Step 2 into Box 4, Edit generated TMS URL (optional), and then give it a name in Box 5, such as Strava Global Heatmap High-Res.
  5. Click OK twice to return to the main JOSM window, and then try to use this layer over some existing data, just as you would aerial imagery.

If it doesn’t work, double-check your URL and be sure the entered cookie values are right, including the underscores after CloudFront-Policy and CloudFront-Signature. Also be sure you haven’t logged out of Strava, as this will expire the cookies. (If you make changes to the URL in JOSM you will need to delete the existing imagery layer and then re-add it to have the new URL used.)

These cookie values, in particular CloudFront-Signature, will occasionally change as cookies are expired and reset or if you log out of Strava. If things were working and then stopped, you may need to get new cookie values from your browser and update the TMS strings.

By default the TMS URL we started with shows the default heatmap, for all activity types, with Hot coloring. Depending on what other data you are working with it may be useful to show just Ride or Run data, perhaps in different colors. In the TMS URL, the first part after tiles-auth is the type of data, and the second is the color. By using this format, replacing [data] and [color], you can create additional heatmap layers:

tms[15]:https://heatmap-external-{switch:a,b,c}.strava.com/tiles-auth/[data]/[color]/{zoom}/{x}/{y}.png?Key-Pair-Id=[CloudFront-Key-Pair-ID]&Policy=[CloudFront-Policy]&Signature=[CloudFront-Signature]

Valid values for type of data: all, ride, run, water, winter.

Valid values for heatmap color: hot, blue, purple, gray, bluered.
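For example, a ride-only layer with the blue color scheme would look like this (cookie values elided as before):

tms[15]:https://heatmap-external-{switch:a,b,c}.strava.com/tiles-auth/ride/blue/{zoom}/{x}/{y}.png?Key-Pair-Id=[CloudFront-Key-Pair-ID]&Policy=[CloudFront-Policy]&Signature=[CloudFront-Signature]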

Creating multiple layers, riding-only and running-only, of different colors was extremely useful when mapping Avalanche Preserve Recreation Area in Boyne City as cyclists tend to stick to the mountain bike trails and runners to the hiking trails. While I had orthorectified maps from both the city and TOMMBA, the separate riding and running heatmaps made the routes much clearer. In the example image above (link) I have the Ride data as color Hot and the Run data as color Blue; perfect for illustrating the mountain bike vs. hiking trails.

If you’d like to see this area as an example, load the area around changeset 85690641 or a bounding box with the following values and see for yourself:

  • Max Lat: 45.2077126
  • Min Lat: 45.1878747
  • Min Lon: -85.0339222
  • Max Lon: -84.9832392

BorgBackup Repository on Synology DSM 6.2.2

(UPDATE: With the release of Synology DSM 7.0 this setup will break. It’s easy to fix, and I’ve added this post describing how to make this system work under the new version.)

Lately I’ve become enamored with BorgBackup (Borg) for backups of remote *NIX servers, so after acquiring a Synology DS1019+ for home I wanted to make it the destination repository for Borg-based backups of nuxx.net. While setting up Borg is usually quite straightforward (a package or stand-alone binary), it’s not so cut and dry on the Synology DiskStation Manager (DSM); the OS which runs on the DS1019+ and most other Synology NAS’.

What follows here are the steps I used to make it work, and the reason for each step. In the end it was fairly simple, but a few of the steps are obtuse and only relevant to DSM.

These steps were written for DSM 6.2.2; I have not checked to see if it applies to other versions. Also, I leave out all details of setting up public key authentication for SSH as this is thoroughly documented elsewhere.

  1. Enable “User Home Service” via Control PanelUserAdvancedUser HomeEnable user home service: This creates a home directory for each user on the machine and thus a place to store .ssh/authorized_keys for the backup user account.
  2. Create a backup user account and make it part of the administrators group: Accounts must be part of administrators in order to log in via SSH. Starting with DSM 6.2.2 non-admin users do not have SSH access.
  3. Change the permissions on the backup user’s home directory to 755: By default users’ home directories have an ACL applied which has too broad of permissions and SSH will refuse to use the key, instead prompting for a password. Home directories are located under /var/services/homes and this can be set via chmod 755 /var/services/homes/backupuser. (See this thread for details.)
  4. Put ~/.ssh/authorized_keys, containing the remote user’s public key, in place under the backup user’s home directory and ensure that the file is set to 700: If permissions are too open, sshd will refuse to use the key.
  5. Test that you can log in remotely with ssh and public key authentication.
  6. Place the borg-linux64 binary (named borg) in the user’s home directory and confirm that it’s executable: Binaries available here.
  7. Create a directory on the NAS to be used as the backup destination and give the backup user read and write permissions.
  8. Modify the backup user’s ~/.ssh/authorized_keys to prevent remote interactive logins and restrict how borg is run: This is optional, but a good idea.

    In this example only the borg serve command (the borg repository server) can be run remotely; it is restricted to a 120GB storage quota and to the repository at /volume2/Backups/borg on DSM, and only accepts connections from the remote IP 192.168.0.23:

    command="/var/services/homes/backupuser/borg serve --storage-quota 120G --restrict-to-repository /volume2/Backups/borg",restrict,from="192.168.0.23" ssh-rsa AAAA[...restofkeygoeshere...] remoteuser@remoteserver.example.com

Please note, there are a number of articles about enabling public key authentication for SSH on DSM which mention uncommenting and setting PubkeyAuthentication yes and AuthorizedKeysFile .ssh/authorized_keys in /etc/ssh/sshd_config and restarting sshd. I did not need to do this. The settings, as commented out, are the defaults and thus already set that way (see sshd_config(5) for details).

At this point DSM should allow a remote user, authenticating with a public key and restricted to a particular source IP address, to use the Synology NAS as a BorgBackup repository. For more information about automating backups check out this article about how I use borg for backing up nuxx.net, including a wrapper script that can be run automatically via cron.


Using borg for backing up nuxx.net

For 10+ years I’ve been backing up the server hosting this website using a combination of rsync and some homegrown scripts to rotate out old backups. This was covered in Time Machine for… FreeBSD? and while it works well and has saved me on some occasions it had downsides.

Notably, the use of rsync’s --link-dest requires a destination filesystem that supports shared inodes, limiting me to Linux/BSD/macOS for backup storage. Also, while the rotation was decent, it wasn’t particularly robust nor fast. Because of how I used rsync to maintain ownership with --numeric-ids it also required more control on the destination than I preferred. And, the backups couldn’t easily be moved between destinations.

In the years since, a few backup packages have come out for doing similar remote backups over ssh, and this weekend I settled on BorgBackup (borg) automated with borgbackup.sh.

Borg and a single wrapper script (that I based on dailybackup.sh) executed from cron simplifies all of this; handling compression, encryption, deduplication, nightly backups, ssh tunneling, and pruning of old backups. More importantly, it removes the dependence on shared inodes for deduplication, eliminating the Linux/BSD/macOS destination requirement. This means my destination could be anything from a Windows box running OpenSSH to a NAS to rsync.net.

Here’s a brief overview of how I have it set up:

  1. Install BorgBackup on both the source and destination (currently Linux nuxx.net server and a Mac at home).
  2. Put a copy of borgbackup.sh on the source, make it executable, and restrict it just to your backup user. Edit the variables near the top as needed for your install, and the paths to back up in the create section.
  3. Create an account on the destination which accepts remote SSH connections via key auth from the backup user on the source.
  4. Restrict the remote ssh connection to running only the borg serve command, to a particular repository, and give it a storage quota:
    restrict,command="borg serve --restrict-to-repository /Volumes/Rusty/borg/servername.nuxx.net” from=“10.0.0.10” ssh-rsa AAAAB3[…snipped…] root@servername.nuxx.net
  5. On the source server, turn off shell history (to keep the passphrase from ending up in your history; in bash: set +o history), then set the BORG_REPO, BORG_PASSPHRASE, BORG_REMOTE_PATH, and BORG_RSH variables to match what the script defines:
    export BORG_REPO='ssh://user@server.example.com:220/path/to/repo/location/'
    export BORG_PASSPHRASE='PASSPHRASEGOESHERE'
    export BORG_REMOTE_PATH=/usr/local/bin/borg
    export BORG_RSH='ssh -oBatchMode=yes'
  6. From the source, initialize the repo on the destination (this uses the environment variables): borg init --encryption=repokey-blake2
  7. Perform the first backup as a test, backing up just a smallish test directory, such as /var/log: borg create --stats --list ::{hostname}-{now:%Y-%m-%dT%H:%M:%S} /var/log
  8. Remotely list the backups in the repo: borg list
  9. Remotely list the files in the backup: borg list ::backupname
  10. Test restoring a file: borg extract ::backupname var/log/syslog
  11. Back up the encryption key for the repo, just in case: borg key export ssh://user@server.example.com:220/path/to/repo/location/
  12. Define things to exclude in /etc/borg_exclude on the source, using borg’s pattern syntax. In mine I use shell-style patterns to exclude some cache directories:
    sh:**/cache/*
    sh:**/locks/*
    sh:**/tmp/*
  13. Create the log directory, in the example as /var/log/borg.
  14. Run the backup script. Tail the log file to watch it run, see that it creates a new archive, prunes old backups as needed, etc.
  15. Once this all works, put it in crontab to run every night (a sample entry follows this list).
  16. Enjoy your new simpler backups! And read the excellent Borg Documentation on each command to see just how easy restores can be.
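For reference, a hypothetical crontab entry for step 15 (the script path, log path, and 02:30 run time are assumptions):

30 2 * * * /usr/local/sbin/borgbackup.sh >> /var/log/borg/cron.log 2>&1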