
I’d been given an older Netgear wireless DSL/Cable router years ago. This included the box, which was nice, because I planned to sell it for $dollars$ and it usually presents nicer when you have things like that. I naturally forgot about it until it was unearthed months ago. I listed it and waited, as one does.

Eventually someone will come along and want/need the old, outdated thing you have. Or so my experience has taught me. I recently sold a pair of AirPort Extreme routers for OK money, but have also sold other 802.11n routers in the recent past which no one should use.

The WNR2000 isn’t special. It’s one of the upright, stand-on-end type, so maybe that’s a little unusual, but it’s a black rectangle with lights on it, so pretty standard in that way. This is the v4, which isn’t important unless you find yourself in the situation I did.

When I was alerted that it had sold, for $dollars$, I figured that it was a good idea to power it on and check to see if I’d reset it. What I was greeted with after a few moments was a flashing amber light. This seemed to be a bad thing, and the NETGEAR%% wireless network that was listed on the back wasn’t showing, nor were the lights on the front that would indicate that wireless or WPS were working. Oh boy.

I vaguely recall booting it up and trying to update the firmware, but that memory isn’t to be trusted. Maybe this is why I was given the router? Honestly, that didn’t matter. I’d prefer to sell a useful item rather than alert the person who had paid for it that a refund was in order. Some time could be devoted to this, and I am not uncomfortable in these situations when there is a solution to find.

Lots of searches and a few videos later, I was pretty sure that Netgear had a solution. I found that this was either a power issue or corrupted firmware. I was using the beefy 12V 1.0A Netgear adapter that it came with, so that wasn’t it. So, how do I fix this firmware thing? Well, Netgear didn’t make it easy, and as anyone who has ever done any firmware hacking with routers knows: versions matter.

I found an insightful video detailing how to transfer the firmware to the router using an old protocol, TFTP. I hadn’t used it in a very long time, not since the days in school of sending files to Cisco routers. I was pleased to see that usable TFTP programs existed, so I could at least skip the command-line syntax minefield. What was frustrating, however, is that Netgear didn’t seem to make this easy. Sure, the firmware was right there, including many older versions back to the one it shipped with. However, instructions were not clear.

Then I stumbled upon the right search phrase and voila, here we go.

The instructions are pretty clear, except for one step, but otherwise I was pleased at the result. I followed the steps here to send the latest firmware file to the N300 WNR2000v4 and it’s back up and working:

https://kb.netgear.com/000059633/How-do-I-upload-firmware-to-my-NETGEAR-router-using-a-TFTP-client-on-Microsoft-Windows

The process is quick, if you make sure to not PUT before you should. I tried this, and then realized the directions wanted me to restart the router. After following those steps, PUT acted fast and the router restarted. After a few minutes I was greeted with a solid light, and then the wireless and WPS lights. I checked and found that the NETGEAR%% network was available. Going to 192.168.1.1 on the device connected to the router displayed the setup page. Just to be sure, I powered it off, waited, powered it back on and was relieved that it booted again.
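I used a GUI client, but for reference, the rough command-line equivalent with the built-in Windows TFTP client (it has to be enabled under Windows Features first) would be a single line like the one below. This assumes the router is listening at its default 192.168.1.1 in recovery mode, your PC has a static address in the same 192.168.1.x range, and the firmware filename is a stand-in for whatever you downloaded from Netgear:

tftp -i 192.168.1.1 put WNR2000v4_firmware.img

The -i flag forces a binary transfer, which is what a firmware image needs.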

Now it’s in the mail, headed to its new owner, thoroughly reset. I have a little more knowledge of how to do a recovery of this nature, and a new tool to try and remember.

Signal To Noise

This one is work related, and involves converting analog media to digital at scale. It’s been going on for many years, and that usually means that things have to change.

We initially used an Elgato Video Converter for this task on macOS, and it worked well for years. However, this was the only use of the iMac, so we shifted the unit to a Windows 10 computer. It worked exactly the same and was reliable.

Windows 11 introduced some enhanced security, including what Microsoft calls Core Isolation with Memory Integrity. This is a good thing, until the software you’re using wants access to that protected memory and gets really upset when it doesn’t get it. Elgato hadn’t updated its Windows software, which is available on their website, in many years. It has updated the macOS version, but that’s not a path we’re going down.

Core To the Problem

So, figuring out what to replace the Elgato converter with took a few quick searches, and two items were purchased. One half of this arrangement takes the composite video and stereo audio from the analog source and converts it to HDMI. These units frequently bill themselves as upscalers, taking a 480p signal and bumping it to 720p or 1080p. This is fine, but the video output we want is 640×480.

https://a.co/d/9tRv0uW

The second part is another device that intakes an HDMI signal and splits it into an audio and video stream. It’s very simple, and it works great in theory. There were some hiccups, however, and that’s where it gets interesting.

https://a.co/d/0G3wg0v

We needed to use different software for the video conversion, moving from the Elgato software to OBS. It’s well-suited to the task, and with some simple instructions it was ready for others to use. Pick a video and audio source, change the resolution to 640×480, and start recording. Simple. It worked. Until someone compared video done with the new setup to the old one and raised a red flag.

Audio is the culprit here. No issues with the video, but there was something very off about the audio. I initially attributed this to reverb in a large speaking room on the tape in question, but when I heard the original transfer I went quiet. It sounded far better. This wasn’t just a small difference.

Cheap for a Reason

Interleaving is a way to do two things at once. In video it’s better known as interlacing, which older televisions used: draw one line on the screen, skip the next, and then fill in the skipped lines on the next pass. It even survived into the 1080i resolution in the days before LCD TVs were really good. Audio has used interleaving for years as well, but only inside the processing chain. Not audibly.

Research into the HDMI-to-USB converter alerted me to some odd settings in the USB Audio Device that was shown in Windows. It was displayed as having 1 channel at 96kHz. This was bad for two reasons: mono isn’t what I expected, and the sample rate was double what it should be. Others had noticed this, and it seems that the audio chip on these devices interleaves the left and right channels, meaning that there’s a stereo, 48kHz signal in there but you have to process it to get it out. The device had no drivers, nor did any seem to be available.

https://a.co/d/7RHZTfC

I did find that some owners of components using this audio chip had written code that could untangle the two channels, and in testing I found that it worked. However, due to the technical nature of the process I decided to pass on this as a solution. More research and I found that NZXT made a device called the Signal HD60 that not only had a real specifications page on its website, but explicitly stated stereo output at 48kHz.
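The untangling itself is conceptually simple: if the “mono, 96kHz” stream is really alternating left/right samples at 48kHz, stripping the header and re-labeling the raw PCM recovers the stereo. Here’s a rough sketch of the idea using ffmpeg, not the code those owners wrote, and it assumes a 16-bit capture saved as capture.wav:

ffmpeg -i capture.wav -f s16le -acodec pcm_s16le mono96k.raw
ffmpeg -f s16le -ar 48000 -ac 2 -i mono96k.raw untangled.wav

The first command dumps the raw samples, and the second reinterprets those same samples as two channels at 48kHz. If the channels come out swapped, the stream started on the other channel and needs a one-sample nudge, which is part of why I passed on this as an everyday solution.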

A Fix

It was an easy install, and I tested it briefly and it sounded fine. To me. I asked that it be tested and around a week later I heard back that it sounded exactly the same. Dubious, I asked to be shown the new and old examples. They were right, and I was scratching my chin again.

Remember that there are two devices doing the conversion, one from RCA to HDMI, and the other from HDMI to USB. I’d only replaced one of these, and either or both could be the offending device. I did some quick looking through comments on the Amazon listing for the RCA device and audio was mentioned many times, negatively. Super.

So, now what do I do? Test to see if the NZXT device is truly passing DVD-quality stereo audio through? Then test the RCA device again to make sure that it’s at fault? Replace the RCA unit with something else? Yes?

Testing

I started by checking the old HDMI/USB adapter using an iPad with an HDMI output dongle. Inputting this to OBS resulted in no stereo recording while using a stereo panning video on YouTube. I tinkered with a few other things, but the audio echo and general sound were very familiar. Very bad.

That resolved, it was time to test the NZXT Signal HD60 device in the exact same way. A similar video, but longer and with more range, performed as I’d hoped. Stereo separation and some of that neat psychoacoustic stuff to go with it. I expected this, so it was time to move on to the RCA/HDMI adapter.

Here’s where I had a short mental block, which lifted when I realized that I didn’t need to get video in to test, only audio. An older iPad, one with a 3.5mm headphone jack, and a cable with stereo RCA ends was all I needed to confirm what I suspected: the RCA/HDMI device had the same problem. It did not.

To my surprise, the RCA/HDMI adapter performed admirably. I wasn’t disappointed, because it meant that the problem was with the software. I asked for the original source material and did some live, monitored testing. I heard what I hoped to, and confirmed with the person responsible for the project that it was good. I then had them sign in to the computer and we repeated the testing on their OBS profile.

Ah Ha!

Some confusion set in when we monitored the audio and it sounded good, but when the recording was played back it had an echo. A quick search led me to a video explaining that it’s best to mute other input sources when recording or streaming from a single device.

Oh! Yeah. The thing I did on my profile but that we didn’t include in the instructions for others…oops.

I did some more testing and found that it was indeed the Mic/Aux and Desktop Audio that were contributing to the audio and creating the echo. Muting those is very easy and a thing that I should have included in my instructions from the start.

A Relief

I was pleased that no additional hardware was necessary. Software can be fixed and reconfigured, and guides updated.

So, the lingering lesson: keep this in mind when a solution to a problem seems a bit too cheap compared to the brands with names you’ve heard of.

Things got interesting.

As mentioned in the previous post about WiGLE and the AtomGPS device, I had used phones as my only means of gathering data, with the tiny exception, long ago, of MiniStumbler. I have, since that post, become more enamored with the AtomGPS itself, especially with the code improvements and community around it. I did want more, though, and especially other frequencies, so how would I be able to do that?

Another phone, of course! No, really, I had a blind spot for the newer WiFi 6E spectrum, and in discussion this year with El Kentaro it seemed like the Pixel 6 series was a step up from what I was using in several ways. Not only was it a faster device, with custom Google Tensor silicon, tons of RAM, and direct support, but it had the radios needed to look into that third band. So, the search began, then stopped, then started, and stopped, and I finally picked one up.

Is it great? Mostly. Is it seeing more networks? Yes. Would I recommend it as a single device? Definitely. Is it the subject of this post?

No.

While having a “main” phone device is a good thing, because it’s self-contained and has many functions apart from WiGLE, there were still other projects out there that piqued my interest. This is a big community, and its members have many ideas, some of which are approachable (AtomGPS) while others are insane and awesome (Lozaning, Mr. Bill, BusySignal, others). I wanted something that could do what a typical phone could do but be dedicated solely to the task. This meant GPS, Bluetooth, 2.4 and 5GHz. The big one that I am always willing to pass on is GSM because, well, who’s counting those?!

Sweet Spot

I’d seen hints of several neat standalone devices, but one kept surfacing as the most interesting option. The JHewitt devices, as they’re known in the community, are compact units based on a PCB designed by the owner of wardriver.uk. This PCB allows for the use of two ESP32 boards, a GPS module, temperature sensor, GSM module, a microSD card interface, and an LCD. This is all housed in a 3D printed enclosure with two or three protruding antennae.

Image is of the rev3 plus the RTLBW16 5GHz module – photograph by Luke Switzer

This unit can then be built around the PCB, using the BOM, and used for wardriving as-is. The larger antennae, at least compared to a typical modern smartphone, should be able to hear those fainter transmissions, but in the basic setup they only listen in on 2.4GHz, BT, and GSM. This was a drawback to me because all modern wireless systems use at least two frequencies, and I wanted more. When I got word that a 5GHz mod with the BW16 was available my ears perked up and I was now very interested.

One of the primary code developers for the AtomGPS Wigler build, Luke Switzer, had shown off some of his builds in the past, but also mentioned that he had some components ready to assemble for eager users. This piqued my interest, and after a few inquiries a build was agreed upon. It turned out that this was the first of this configuration for Luke, and I’m happy to say it was featured on his Twitter account during the assembly and testing. I was very eager to get this custom build and join the crew of those who rock a fully capable wardriving rig that’s self-contained.

I do mean it when I say that this is a standalone unit because it features some incredible software and hardware. The most loved feature of this build is that it has two 18650 batteries powering it, with charging built in, and it’s fully protected from overdraw. I really like the AtomGPS, but powering it from an external battery is a hassle. This community-sourced solution not only means that the 3D-printed case fully supports the battery module, but it’s one cohesive unit that has the feel of a small FRS or HAM radio. This is a good thing, because it’s meatier in the hand while being very balanced.

A side-benefit? The diymore V8 has an external USB-A 5V port for powering other devices. I think that I’m going to use this for powering my AtomGPS once I figure out how to keep it all in one package.

On the software side it’s even more interesting and impressive. The initial setup is all through an onboard web interface where the new user connects to an onboard, ad-hoc AP and web server. It prompts for a known network to connect to for NTP sync and software updates, and a fallback network configuration for use after the initial setup. This works very well and the LCD interface is very good at informing the user about what the unit is doing. This display information includes the IP address, if it’s connecting to the server, and other information prior to the typical status display.

Once set up, there are several features that make this almost as good as a phone. When it’s connected to that configured network, prior to fully booting, there’s a 60-second timer before the web server shuts down and scanning begins. This timer resets if a user connects to the displayed IP and makes changes. The neat features include adding your WiGLE API key for manual uploads, and there’s even an option for it to upload automatically! If you want to download the files individually and upload them separately, that’s also pretty easy to do.

The hardware is brilliant, with several case front options. I chose the more temperature-friendly one with the hexagonal design, favoring airflow over style or weatherproofing.

Using the JHewitt rev3 (mod)

Startup from off is a single tap of the right-side button. Booting shows the software version, connecting to NTP, the wardriver.uk server if enabled, and the option to press the power button during boot to reset to initial settings. Following that, and if a known network is found, any pending uploads will take place automatically if configured. Then starts the 60-second countdown for the web configuration, displaying said timer and local IP address. Make sure that you configure the device on a network with peer visibility…ask me how I know.

Finally you will be greeted with the rev3’s dashboard, displaying seen networks, bluetooth, GPS status, date, time, and temperature. For those who can see it unassisted you’ll find that there are no stats akin to the Android WiGLE client where it’s doing database comparisons. We’re just collecting networks and not doing any data analysis here, so you won’t see any statistics for your run…unless you want to fork the code and take that on yourself!

This fully-stocked unit is easy to handle for those with medium to large hands. The grip, as mentioned previously, is like a large FRS or HAM radio. It’s well contoured and light enough that walking with it in hand is easy. I have done this on several walks, antennas horizontal, and not found any fatigue. I did also add a small Scosche magnetic plate to the left side for car mounting. It’s mostly been used in a vent mount and hasn’t overwhelmed the mount or the vent.

Battery life is good, considering you have two big ESP32s and the BW16 onboard with a display and GPS. With what I expect to be a full battery charge I have exhausted the battery over the course of a long day. On my unit there is no way to see or measure the status of the batteries, but I have become more willing to power it down when I’m not moving or in a new area. This increases the number of files, but it is a good idea if you’re away from charging capability.

On that note, this diymore battery pack would be even cooler if it supported hot-swapping of the 18650 batteries, but its protection circuitry doesn’t allow for that. If you pull a battery out and place it back in the holder it will almost certainly not then be drawing current. To be fair, I’ve not tested pulling out one, putting it back in, then taking out the other. However, I’ve read the documentation and user comments about the required power cycle when charging. If you plug the pack in and the unit is on, it will charge and run as expected, but pulling the power will force a power cycle. This could be used strategically with an external battery pack to extend the run, of course, but remember that unplugging it or turning off the pack will restart it.

Performance

So, how well does it perform?

I’ve been using it with the AtomGPS and the two Android phones for a week now. I’m in the habit of uploading the Moto first, then the Pixel, then the rev3, and I’ll add the AtomGPS less frequently simply because of the work involved. So far I am impressed!

If you are looking for a BT sponge this is not your device. The Pixel 6 and most phones will do a much better job at finding those than an ESP32 or BW16. As for the other numbers, well, they’re impressive. I’ve been seeing very solid network observations when analyzing the WiGLE upload statistics. The rev3 always has significant new networks to add even after the Moto and Pixel have submitted theirs. The higher-sensitivity antennas give it a huge advantage and make each trip a little more effective. This is what I wanted, and it’s really boosting my upload totals.

This is a sample of a recent run, with the Moto on the top line, Pixel in the middle, and JHewitt rev3 on the bottom. Those are some impressive numbers!

One of the neat things that external antennae allow for is bigger, or different, designs. Do you have an old AC1300 Nighthawk in a drawer? Grab some of those cool blade antennas and swap them in. Have an old Yagi in a closet, or maybe a magnetic omni? Let’s go for a roadtrip!

Conclusion

If you are new to WiGLE this isn’t the place to start, IMO. The numbers-go-up pleasure from the Android app is quite alluring, even years later. Driving around a new neighborhood and visually seeing a change is nice, and you get a better understanding of what’s out there. It will drive you to get more, or better, devices to do one of two things: add more input or add consistency. Some phones are just bad, while others are mysteriously over-performers. However, if you don’t have a phone with you, you’re breaking rule #1: Always Be Wardriving. Having the AtomGPS in a vehicle or backpack is easy. Same with a phone.

One of the most puzzling things about the Apple Silicon MacBook Air in 2024 is that it can only natively support one external display. This even extended to the early 13″ MacBook Pro, which was just an Air with a Touch Bar. I’ve supported many of these devices since their introduction, M1 and M2 chips, 13″ and 15″ sizes. As of this post, there’s only one way to get full support for multiple displays.

Synaptics has a technology called DisplayLink, which it licenses to many companies: StarTech, HyperDrive, Plugable, and more. Download the drivers from their website and the OS is tricked into sharing a virtual display with the adapter, letting a user have multiple displays. It works quite well, but has its limitations. One of these popped up recently when upgrading from macOS Ventura (13) to Sonoma (14) and using the Zoom accessibility feature. It’s something that I use often in order to make text easier to read, and is pretty important, so when the display connected to my StarTech DisplayLink adapter stopped zooming I was a little puzzled. But I wasn’t surprised, because Apple has been tinkering and changing things for years and breaking capabilities like this.

I stumbled into a solution by accident, however: Zoom now works on both displays, and each display magnifies independently. This is a change from Ventura, where the combined display width and height were treated as a single canvas; zooming in on one display would also zoom the other, which wasn’t ideal, but worked.

In switching from a Hyperdrive dock to an older, first-generation 13-port OWC Thunderbolt 3 dock, I was unsure which display was connected to which cable. This dock is very odd in that its only display outputs are Thunderbolt 3/USB-C DisplayPort and a miniDP plug. No HDMI or full DP plugs. I was using a miniDP-to-DP cable on Dell displays, but found out that I needed to move it. In this confusion I found that the DisplayLink adapter had grabbed the menu bar, making it the primary display. This isn’t the monitor that I would normally use as primary, so I was going to change it, but found myself using the zoom gesture to make the System Preferences panel larger and had to pause and think about what had just happened.

After the Sonoma update, with this adapter driving the secondary display, it no longer zoomed. Instead, it glitched a little bit and stuttered. That it was now working was unexpected, so I checked to see if somehow the issue had been resolved by setting things back up as I had them previously: primary from the dock, secondary from the StarTech adapter. The issue returned. So, I switched it back and the zoom worked as expected and hoped. I thought about using my second display as the primary monitor, and even set it up like that for a few minutes, but thought about the whole situation again.

Why was the DisplayLink adapter working when it was primary? I don’t know, but there must be some kind of priority and feature enabled when it is. Thinking for a moment, I decided that switching the two display inputs couldn’t hurt, so that’s what I did. It worked exactly as I’d hoped, and is very much a solution to the problem I’d had since upgrading to Sonoma.

As mentioned earlier, I can now zoom in on either monitor independently of the other. In this window I’m zoomed in, while the other is zoomed out to the full, native resolution. There’s no choppiness or lag, which I did admittedly remember happening on the DisplayLink adapter under Ventura. This behavior is why I wouldn’t have considered the StarTech USB adapter fit for the primary display. Now, however, it works easily as well as the display connected to the dock.

I had emailed Synaptics earlier to ask if this was a known issue and received a response that it was. The software as of this writing is 1.10.0 Build 117 and was released in October 2023. No beta is available. I have emailed that same contact to inform them that there is a possible resolution for some users and I hope that they will acknowledge this and that it’s already a known fix.

I’ve been part of the WiGLE Project, as a contributor, since 2017. I have a much longer history with the idea and practice of “wardriving” that extends to the early days of wireless internet as a thing. Wardriving was something we would do with a PCMCIA wireless card, an antenna, and a laptop. Or in my case, a portable PDA running Windows Mobile. Most of the data contributed to the greater project is done so with Android phones, which makes sense because they have good antennas, GPS, and respectable processors. Things are getting weird though, and some devices that are neither “computer” nor “phone” are becoming more popular in the fringe. Let’s talk about my step into this new world.

More creative members of the community have latched on to the small, cheap, and powerful ESP32 devices that are increasingly competent and accessible. They’re easy to code for and program, a favorite in DIY projects, and are turning up in some compelling packages. One such project is the M5Stack AtomGPS device. It’s a combination of the M5Stack Atom Lite ESP32 unit, plopped into a saddle that includes SD and GPS, which is a critical part of the WiGLE project.

These are affordable, when they’re available, and cost around $30 USD each. This includes the Atom unit, the GPS saddle, and a USB-C to USB-A cable. Add a 4GB-32GB SD/TF card and you’re good to go for hardware. I obtained mine from Mouser, but other sources include DigiKey. Software, well, that’s a different story. Buckle up!

I do not regularly use ESP32 devices, nor Arduino, but I am loosely familiar with the software IDE. I chose to use Windows for my firmware flashing platform, just as a matter of availability, so here’s where it got interesting.

Note that I am an ESP32 amateur. This guide is merely a verbose interpretation of the README.md. Refer first to that guide, which is updated frequently, and follow this one if you’re looking for another option.

git clone https://github.com/lukeswitz/AtomGPS_wigler.git

Installing Git is easy, especially if you choose the Portable installer. Using the instructions here, or the code above, you can clone the repo. This will place a folder called “AtomGPS_wigler” inside of your Git Portable directory, which is probably in Downloads if you’re using Windows. Why did I prefer the Portable version? It’s quick and easy to use and doesn’t integrate functionality with your computer, making it perfect for a small project like this. The Arduino IDE is also a pretty straightforward install, and it’s an easy interface to use. Once that’s installed there are two very important things to add for the M5Stack Atom specifically: the board and the libraries.

Add the M5Atom libraries, and all of the dependencies, under the Library Manager tab on the left side of the Arduino IDE:

Copy the URL of the board index JSON below, then go back to the Arduino IDE and open File > Preferences. Add this Espressif board index in the “Additional boards manager URLs” area (click the icon to the right and confirm the URL), then click OK.

https://espressif.github.io/arduino-esp32/package_esp32_index.json

Select the Boards Manager tab on the left, search for “esp32” and install the Espressif Systems package.

Now make sure that you have an SD/TF card formatted in FAT32, MBR. 4GB-32GB are tested/supported. The slot is on the back side of the GPS saddle, and has that little click in/out mechanism. Plug the AtomGPS in to your computer and get ready to flash.
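If you’d rather format the card from a command prompt than from Explorer, the built-in Windows format command handles FAT32 for cards up to 32GB. This assumes the card shows up as drive E:, so double-check the letter before running it, since formatting is destructive:

format E: /FS:FAT32 /Q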

Open the repo clone file called “AtomGPS_Wigler.ino” and make sure that the board type is set to M5Stack-ATOM or M5Stack-Core-ESP32. I have both of these here because of mixed success. One should work as expected, while the other may fail.

Restart the Arduino IDE.

Click Upload in the IDE toolbar and wait for the magic to happen…or errors. If things go well you’ll see a white cascade of the flashing process, after which the device will be rebooted. If all goes well, you should see a purple, then green LED flashing on the unit. If it’s red, well, that’s a problem. In my case this meant an older problematic version, which I will cover a little bit later. Future versions fixed this issue and the purple/green is what you’ll be greeted by.

Okay, yes, that’s all it does. It powers on, it gets a GPS fix, and it scans. Lines are being written to the SD card in a unique file per boot. These are compatible with WiGLE and can be uploaded at your convenience or the next time you see that hot shot delivery van driver you gave $20 to put it in the glovebox…

Now the story begins, as does the other way to flash the AtomGPS:

All of this, and my first try at a pair of new AtomGPS units failed. I spent more time than I care to total in troubleshooting why, until a fellow user pointed out that the code developer noted some reports of SD card detection issues in the older 1.3.2 version. This is the one I was trying to use, and instead of making changes to the code to fix SPI, I decided to use esptool, the alternative way to directly flash a .bin file.

There is a driver from FTDI, referenced from the M5Stack GitHub, that I can recommend downloading and installing from here. I have found it not to be strictly necessary, but it’s a simple install.

To use esptool I simply opened the Windows Store and installed the latest version of Python available at the time (3.xx). This done, I opened a command prompt and installed esptool with the “pip install esptool” command, which took very little time. In some environments it would be possible to run “esptool.py” if environment paths were set up properly, but instead I used “python -m esptool” with some arguments:

Note: Refer to the README for updated commands

For version 1.3.1 I used the following command with COM3:

python -m esptool --chip esp32 --port [PORT] --baud 115200 write_flash -z --flash_mode dio --flash_freq 80m --flash_size detect 0x10000 AtomGPS_wigler_v1_3_1.bin

For version 1.4 I used the following command with COM3:

python -m esptool -p [YOUR_PORT] -b 1500000 --before default_reset --after hard_reset --chip esp32 write_flash --flash_mode dio --flash_size detect --flash_freq 80m 0x1000 AtomGPS_wigler_v1_4.bootloader.bin 0x8000 AtomGPS_wigler_v1_4.partitions.bin 0x10000 AtomGPS_wigler_v1_4.bin

Note: versions mentioned below have been superseded by 1.4 and above

Make sure that when you copy these commands you replace [PORT] with something like COM3 and do not include the brackets. Ask me how I know that it doesn’t work with brackets…
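It’s also worth confirming that esptool can talk to the Atom at all before flashing. A quick sanity check, again assuming the device shows up as COM3:

pip install esptool
python -m esptool --port COM3 chip_id

If that prints chip details instead of a connection error, the port and cable are fine and any remaining trouble is elsewhere.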

This directly flashes a .bin file, which needs no libraries or dependencies as it’s a full image, and if it writes successfully then the firmware is good. This done, my first of two AtomGPS units was working with a 16GB SD card. I totally expected the second to work just the same, but it was not so easy. Using a 4GB card I was unable to get the second one to work with esptool and version 1.3.2. Frustrated, I flashed 1.3.1, which was in the Git clone folder. No luck, but in some frustration I swapped in a 32GB SD card and it booted up immediately. This was a surprise.

So, what would you do next? Flash 1.3.2 of course. Did it work? No!

So, after a flash back to 1.3.1 and a few minutes of running, I checked the SD card for the AtomGPS csv files which were now present. It’s not a matter of how well it works, at this point, but that it does. The developers and community will add features and improve some of the code, but there’s not a really compelling reason to make sure that the devices are up-to-date. So, in the interest of not spending any more time on these in the immediate future I’ll be content with two of the AtomGPS units running different versions of the software.

Lessons learned? Plenty. I should have checked with the developer to see if there were any known issues. This would have saved me a lot of time, but it wasn’t all in vain. Having more SD cards available is certainly a plus. Knowing that the developer provides older .bin files in the Git clone was a handy thing because I could fall back on older code to avoid new features and potential issues.

Would I recommend an AtomGPS or other dedicated WiGLE device? Yes. These are super handy to put in a vehicle or somewhere that either has a lot of traffic or itself travels a lot. It’s not necessarily a device that needs a file dump and upload daily, or even weekly. I’ll be putting one in each car and uploading the files when I think it might be good. I’ll also keep using a phone too, because you can never have too many antennas…

Thank you to Luke Switzer for the software development, Lozaning for inspiring it, and pejacoby for showing off the project on social media.

A bit of a firmware downgrade story:

Lenovo Thinkpad Universal USB-C dock firmware issue with AMD-based E-series laptops.

Issue arises when users or admins upgrade the dock to the .91 version using Vantage, Dock manager, or manually with a downloaded executable.

I can attest to seeing a working dock have the following issue after the upgrade from a previous version.

The primary issue with the 3.0.91 firmware is that any displays attached to the USB-C dock will fail to work with these AMD systems. In several cases, with multiple generations of the E15 we saw this happen.

Running the updater will give errors, mostly that it cannot read the current version of the various hardware revisions, and it will eventually fail.

Other laptops will work with the dock, both Windows and macOS alike. The dock is usable for USB, ethernet, and audio, but video will be non-responsive and not shown as available.

Resolution:

Firmware downgrade!

Using this recent Reddit thread I was able to get the 3.0.80 firmware from Lenovo.

Using a Dell laptop with USB-C I was able to successfully install the previous firmware version and downgrade to 3.0.80. All functions return and the dock is completely usable.

I’ve done this without issue on 4 of the 40AY USB-C docks and all devices which were not previously working are now operating properly.

Update – 01/08/24

Lenovo has since released the 3.0.92 firmware update for these docks and I can confirm that it avoids this issue. One dock that had been downgraded was successfully upgraded from .80 to .92 and all functionality worked. Another was a recent .91 firmware install that I was unaware of, a successful downgrade, and then an upgrade to .92.

Reddit thread:

https://www.reddit.com/r/sysadmin/comments

Where it started

My journey to the current project starts pretty far back, in relative terms, and with a simple setup. I had been using a 24″ iMac as my primary computer, running Windows with BootCamp, and wanted a place to store media for use on the LAN or available elsewhere through OpenVPN or a service you may remember, Splashtop. This system was a quartet of 1.5TB Seagate drives on an Intel DP55WB board with an i5-750, some RAM, and a boot disk. It ran Windows well, and the disks were set up in the BIOS as a RAID5 array, something Intel used to include on workstation boards like that. It was good. It was also overkill, big, a little too noisy, and I wanted something smaller.

My supervisor at the time was looking into a dedicated NAS and we both arrived at the QNAP brand nearly simultaneously. I went with the TS-459Pro-II and he got something similar. It was a two-core Atom CPU with four bays and expandable RAM. Decent web interface, dual LAN adapters, and it was quiet/cool. This was in early 2012, and disks were hard to get due to a severe flood season in Thailand, where many companies were building them. I was able to get four Samsung Spinpoint disks, 2TB each, and assembled everything. It’s been a solid system for streaming, file storage, and some occasional remote use with OpenVPN. I am still using it right now because it’s low-impact and QNAP keeps releasing maintenance firmware updates.

To replace the QNAP, for storage capacity reasons, I looked around for options. A used QNAP four-bay was almost always available locally, but reading reviews of the newer hardware gave me pause. Mentions of backplane failures and other issues kept me from picking up something second-hand. It didn’t help that sellers seemed to think that these 5-10 year old appliances were worth so much. Patience paid off, however, and that convergent interest also came about again with my now colleague.

We both still had our QNAP units running, and he’d also done a rackmount setup, but was really looking to retire some old, hot, power-hungry enterprise server hardware and consolidate if possible. I’d just joined the UT Home Labs Discord server and seen how much people were talking about NASes, so I got a bit enthusiastic about a BYO solution instead of something purpose-built. We settled on unRAID, a solid piece of software that has a generous trial period and reasonable prices for a lifetime license. I was able to get it set up and use Docker for the first time, finding a bit of a new world in features that a NAS platform could offer.

The hardware for this NAS came from a used U-NAS NSC-400 four-bay unit, which the previous owner sold with power supply, motherboard, CPU, and RAM. It is a Haswell Core i3-4330T, 8GB of RAM, and four HGST BLE 4TB disks. The setup was easy, it runs quieter than the QNAP, and the performance was also a good step up. I run Deluge, pi-hole, Krusader, and a few more Docker instances on it with ease. Barring some puzzling issues with SMB speed on macOS, it’s been very good and worth the 12-disk license cost for future expansion.

I was able to consolidate many volumes of data, with an unspeakable number of duplicates, into this four-disk array, but I found that even with a doubling in capacity I was uncomfortably close to the nearly 12TB size of the array. What then? Add some 8TB disks one at a time and expand the array? I didn’t like that idea because it was committing to base hardware that was already a bit old. What I needed to do was make this a stepping stone to the next long-term solution.

Where it’s going

My first searches for NAS enclosures did surface SilverStone as an option, but what cemented them was a friend’s experience building something similar. He’d chosen the CS series, with an all-in-one AMD-based motherboard sporting 12 SATA ports onboard, and two disk pools for workspace and cold storage. I found it clever and kept it in mind. Later, finding the DS case design more to my taste, I naturally kept a DS380B in my eBay watchlist for…a while. Some more research led me to find that the ASRock AMD board combo he’d used was, well, pricier than I was willing to pay.

So, for the third time now my colleague and I were doing something very similar. His initial experiences with unRAID were good, but then fraught with issues when he subsequently tried to add disks, then swap in a RAID/HBA controller, then a new motherboard, etc. He learned a lot, and finally settled on an Intel i9 that became available, 12-disk case, 16-port card, and even a GPU for some Steam gaming in a VM.

The turning point was when I was able to acquire a mini-ITX ASUS PRIME H310I-PLUS motherboard with an Intel i7-8700 CPU (Coffee Lake, 14nm) and 8GB of RAM (DDR4-2400), and the wheels started turning. The board has a single PCIe slot, but does have a rear-mounted m.2 slot for SATA/NVMe disks, and plenty of RAM capacity. Four SATA3 ports, USB3, and a focus on reliability over performance. As of this writing I am going to stick with the DDR4-2400 RAM, but will likely up the game soon to at least 16GB of 2666MHz.

The hardware

So, here’s how it’s panning out. I did settle on the SilverStone DS380B. This case offers eight 3.5″ externally accessible drive bays, with rails, that can accommodate SATA and SAS disks due to using an integrated backplane. More on that later. There’s additional space behind the vertical drive cage for 4 2.5″ disks, which is a good place to put some laptop or SSD disks to use as cache or scratch space. They’re out of the way of airflow and cables, so it’s a neat way to get 12 total disks in, or more if your motherboard has interfaces like this ASUS does.

https://www.silverstonetek.com/en/product/info/computer-chassis/DS380/

Setting up the motherboard was simple, as mini-ITX is just four screws. The DS380 drive cages come out easily with a few screws and give unfettered access to the interior. Connecting power, front panel connectors, and fan headers is trivial. I’d asked the UT Home Labs for some advice and was pointed enthusiastically towards the LSI 9207-8i for my disk controller needs. It has two miniSAS connectors for use with up to eight drives, is very affordable at around $30USD, and seems to just work, which I’m all-in on. The case does require a specially adapted cable for use with a card, however, so look into the SilverStone CPS03 cable and make sure you order two. They’re cheap. The HBA card does have some minor clearance issues, so installing it with the full-height bracket does mean that either the DS380’s graphics card support fixture on the drive cage needs to be trimmed, or simply removed entirely. Five screws later the clearance wasn’t a problem.

https://www.tweaktown.com/reviews/4882/lsi_9207_8i_mustang_raid_controller_host_bus_adapter_review/index.html

Getting the system up and running was a little tricky. The onboard SSD still had Windows 10 installed on it, so I was pleased to see it boot successfully on the first start. I was a little puzzled that the SAS card BIOS didn’t show up on boot, nor did the disks in Windows’ Disk Management. The “Mustang” LSI controller did show up under Storage Controllers, so at least I was confident that it wasn’t a card issue. Some poking around and help from a UT Home Labs admin got the SAS BIOS loading, but disks were still not showing up. The trick to getting the card BIOS to show was enabling CSM compatibility in the motherboard’s BIOS instead of insisting on UEFI only. This is a very important step, so if you’re finding that your HBA/SAS card BIOS isn’t displaying before the motherboard’s logo, check for it.

We then connected an SSD to one of the SATA connectors coming from the LSI controller to test and make sure that the card and cable, which is special to these cases (CPS03), functioned as hoped. The drive showed up on a refresh, so we were confident that things were okay. Some more digging found that the reason was likely some misunderstanding of the backplane ports. The DS380B and other SilverStone products in this range have a fully-integrated backplane PCB with one or two Molex power inputs, SATA/SAS-compatible drive slots, fan headers, and two interface ports for each drive bay. One is labeled SAS, while the other is SATA, numbered 0-7. I’d naturally plugged the CPS03 SATA-connector-type cables into the labeled SAS ports, but this wasn’t the proper procedure. It’s apparently possible to use fewer than eight ports from the card with 8 disks through the use of the labeled SAS ports, somehow using the topology of the backplane and SATA cables as jumpers.

After moving the cables to the labeled SATA ports from the SAS ports on the backplane, and a full power cycle, the LSI card was happily seeing all eight of the 8TB SAS HGST disks I’d procured for use in this system. Yes, that’s right, another doubling in drive capacity and also now in number. QUAD DAMAGE!

Disks online, now it was time to test these used units. It was a great opportunity to use a tool that I’d only recently become aware of, but which is fit for this purpose. Ventoy lets you put as many disk images as will fit onto a single USB stick and boot any of them from a menu. I first downloaded the latest DVD image of Knoppix, which didn’t load, but then grabbed Ubuntu. After successfully booting to Live, I loaded up GParted, erased all of the volumes on each disk, and then was advised to run badblocks on each disk. With the i7-8700 having a plethora of cores, I started up eight terminal windows, each running badblocks on a disk. It’s not processor intensive, but it’s nice to know that this new box will have more horsepower than I will likely need, while also being somewhat efficient in power, all the while holding a total of 6 cores/12 threads at the ready if needed. A sizeable step up from the Haswell i3 and completely eclipsing the lowly D525 Atom in the QNAP appliance.
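For reference, this is the gist of what each of those terminal windows was running, with the usual caveats: the -w write test destroys everything on the disk, and /dev/sdb is just an example, so check the lsblk output before pointing badblocks at anything.

lsblk -d -o NAME,SIZE,MODEL
sudo badblocks -b 4096 -wsv /dev/sdb

The -b 4096 block size matters on disks this large, since the default 1024-byte blocks overflow badblocks’ block counter somewhere around the 8TB mark.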

The power supply was the last item to be added, and the form factor is a little unusual, but not what I’d call specialized. SFX is a common small-form-factor size, used in systems that need good power capability in less volume than a traditional ATX footprint. Think of the ATX size, but on a slimming diet, where most of the space is saved in the vertical dimension. There are 1000W variants of this specification, so output power isn’t lacking, that’s for certain. I was able to acquire a SilverStone 500W SFX-L unit, with the -L denoting the version that’s 30mm deeper than the standard spec. This did cause me a little grief with the drive cage due to the modular cable output interfering, but I was able to make it work. The advantage of -L seems to be that it can fit the quieter 120mm fan size. The DS380 also officially supports that version.

One of the complaints and concerns about the SilverStone case I’d heard about from my friend is drive cooling. The DS and CS have at least three 120mm fans drawing air in on the side of the case, and out through the back and power supply. The side-mounted fans do blow directly at the large disk cage, but the airflow isn’t obliged to stay the course and flow over the array. Ingenious owners of these cases have developed 3D printed baffles that screw into the fan’s mounting holes to forcefully direct the airflow over the drive cage. This would cause issues with using a large GPU in the case, and very long SAS cards also. Modifying the baffles for the miniSAS connector to clear is something necessary with the LSI card I’m using, as the connectors point towards the front of the case and extend to the fans.

https://www.printables.com/model/100276-cooling-duct-for-silverstone-d380b/files

Migration

The real challenge is going to be migrating data, and doing it without upsetting what’s already been configured. Ideally I’d like to maintain what I have, but completely change the hardware. This is a feat that wasn’t possible using a device like the QNAP with traditional RAID5 arrays. Disks in arrays like that don’t have to be identical, but the array treats every disk as if it were the size of the smallest one, so any capacity beyond that is inaccessible. unRAID and the xfs filesystem are adept at making drive pools from disks of disparate sizes work. It’s clever, and the only real requirement is that your parity drive is at least as large as the biggest disk in the array; a pool of 4TB and 8TB data disks can use all of its space as long as the parity disk is at least 8TB.

The plan is this: slowly and methodically rebuild the existing array one drive at a time. If I’d purchased SATA drives I would have considered doing this in the old enclosure. In some ways I’m glad that this wasn’t an option, as the H81I-PLUS ASUS board has a split SATAII and SATAIII array of ports, with two of each. Moving the current drives, parity drive included, to the new system is the first step. With only one of the new drives available as a fifth drive, unRAID can build it as a second parity disk for the array. Remember, the parity disk is very important, and it can be larger than the others in the array. This process should take some time, but when it’s done I should then be able to stop the array, pull out the smaller parity disk, and then work on the data disks. Do this three times and it should be a new array with four drives, all 8TB, with double the capacity. Real Ship of Theseus stuff right here.

https://wiki.unraid.net/The_parity_swap_procedure
https://www.youtube.com/watch?v=ADlip32yw8E

With that done, I should be able to return the original disks to their former home, where I could rebuild the array if wanted. Using unRAID, I will be able to then add some of the additional drives to the array, with as many as 12 total. I do plan on using either a 2.5″ SATA drive or the m.2 SATA drive as cache, which I’ve not done as of yet. It may also be prudent to leave one or two of the big disks dormant and spun down, as backups should either a disk fail or my storage needs suddenly change drastically. This flexibility is why unRAID or TrueNAS are really the best choices for users who want to be able to do more with their storage if the need arises.

Will there be a backup? Maybe. Services like Backblaze are available, though more expensive if run on a NAS, but should really be considered as an option. I do also have a QNAP TR-004, which is a USB3 four-disk external enclosure with 4x4TB in it presently. Shucking some larger SATA drives into this appliance would make for a reasonable backup endpoint, so that’s also a possibility.

So, how did it go?

Despite trying to follow instructions on how to copy the existing parity disk to a new one, I failed to get it right. This resulted in a 16-hour rebuild of both the old and new parity disks. I’m not really in a rush, but my understanding is that a copy would have been much faster. Oh well. I even got this wrong the first time around, accidentally assigning a 4TB disk as Parity 1 instead of Parity 2. This meant that I had to assign an 8TB disk as Parity 1 and start the sync over again.

As for the data disks, well, this is the longest part of the process. Each of the data drives needs to be removed from the array, then the array is started, stopped, then started with the replacement disk in place. A drive sync is then run on the new disk, which takes…a long time. Repeat as many times as you have disks. In the end, however, all disks will be replaced and we’ve arrived back at port with what could arguably be called the same ship.

Next will be adding an SSD cache disk, or disks, in the form of Samsung 850 Pro SSDs connected to the motherboard’s SATA ports. This will give us a much faster write speed to the array, even though it does put that data at brief risk because the write to the array isn’t immediate. Mirroring two SSDs, which I’m likely to do, mitigates this risk somewhat.

https://unraid.net/blog/unraid-6-9s-multiple-pools-support

Finally, I need to remove the 4TB drives from the enclosure and swap in the remainder of the 8TB disks. Then it’s a matter of deciding a few things: Do I want multiple parity drives? Do I leave 1-3 drives spun down as backups for capacity or reliability? I most certainly do not need 56TB of space right now, nor will I in years, but it’s there for use if wanted or needed. This is the flexibility built into some of the decisions made for this NAS, namely the case design and number of ports on hand.

Wrap Up

With all of the disks installed, a mirrored SSD cache array, and everything powered up, it really was like walking onto the same boat, but things just feel a little nicer, slicker, and faster. It’s a little noisier than the U-NAS enclosure, but five fans and eight spinning disks will make some sound pressure. Docker images still work, but load and update faster. SMB file listing issues that I had previously experienced are now gone and the SMB read speed is much, much better. The temperatures of the lower four drives, which are unassigned devices, were much higher than they should have been, which was mildly concerning. This was especially odd when the top four disks were 5-8C cooler while being blown on by the same sized fans. A day later the disk temperatures settled into the mid-30s and have stayed there since.

I still smile when I see 12 threads to use, and a late upgrade from 8GB to 32GB of RAM (2x DDR4-2400) means that VMs are now likely going to be a thing. I’ve done some compressing of file-dense directories, which contain 10k to 100k files, and it’s amazing to watch the cores and threads all be used. Next could be expanding the array with a second parity disk, one or two more data disks, and possibly a cold drive ready for an emergency. This arrangement gives me much more capability, flexibility, and reliability than before.
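For anyone curious how to put all of those threads to work on a job like that, here’s a sketch of one approach, assuming zstd is available and using a made-up share path (pigz works similarly if you prefer gzip output):

tar -cf - /mnt/user/dense-share | zstd -T0 -o dense-share.tar.zst

The -T0 flag tells zstd to use every available thread, which is what lights up all twelve of them at once.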

Lessons Learned

There are a few things I’d change in this build, some big and some small.

First, I’d look for a 120mm SFX power supply to make it fit easier, or source an SFX-L power supply that doesn’t have the power output in the center. Modularity is nice, but it really gets in the way of the drive cage fitting properly.

Second would be an HBA card with ports facing up, relative to the plane of the motherboard, instead of sideways. This would allow one to more easily use the fan baffle and route cables more cleanly. It’s likely that there are equivalent cards, identical in every other way, that differ only in port orientation.

Third would be going SATA and not SAS for disks. SATA are easier to source, cheaper, and they run cooler. There’s an argument that SAS are more durable, but at what cost? Also, read on to find out why SATA can be better.

Fourth would be to buy that ASRock AMD board with 12 onboard SATA ports, or any other board with similar capabilities meeting the criteria. This would eliminate the need for an HBA card, opening the possibility of running a GPU in the case for VM gaming or other operations. It’s also more power efficient, as HBA cards do use 7-15W in normal operation.

Last Words

TRY it first. It doesn’t matter what hardware you have for CPU, motherboard, and RAM, as long as it meets some kind of minimums for unRAID, or TrueNAS, or whatever. Get 2-4 drives into an enclosure with a USB stick or boot disk. That’s all you need to get a NAS set up. Try it and I think you’ll find that you will next be thinking, as I did, about how to build it better.

Or shoot for the moon!

More images

Core i7-8700 with 8GB RAM on the ASUS PRIME H310I-PLUS motherboard
Back side of the ASUS PRIME H310I-PLUS showing an SATA m.2 250GB WD Blue drive that was included with the board. Don’t @ me about it not being NVME.
Eight 8TB HGST SAS disks. Thanks ElForesto!
Motherboard and SSDs installed. Note the 2.5″ four-disk cage in the top right.
Cables and connectors connected to the motherboard so that the switches and ports work. A little messy, but it’s a tight fit.
9207 HBA card with the GPU support bracket installed on the 3.5″ drive cage. These touch before the cage can be screwed in.
The GPU bracket was removed, improving clearance for the HBA card and also making airflow better over some of these hot SAS disks.
It boots! This testing was done with the HBA card installed and running Windows 10 from the m.2 drive on the motherboard. Stand-in power supply did pretty good during testing.
Badblockssssssss
HBA 9207 as shipped. No updates or changes made after I confirmed that it and unRAID saw the drives.
2.5″ drive cage. I did move the lower SSD to the bottom in order to better facilitate the SATA power connector shape and orientation.
One of the fans gets a goofy face, while the other has to do without. This picture also shows how the HBA card’s port orientation is less than ideal with this case.
The SAS disks lit up LEDs on the backplane, whereas the SATA drives did not. Also, SilverStone shipped these labels with the case, on which I put the drive serial numbers. This is important if you need to swap a disk and are not sure which one needs to come out without doing a full roll call.

Why Pi?

If you’ve not heard of the pi-hole, well, let’s fix that.

Ads are something that browser plugins can fix, but what about mobile apps, or all of the other circumstances where another program can’t block or protect you from intrusion or tracking?

pi-hole was software originally designed to run on the Raspberry Pi, a small, cheap computer with roughly the power of a mobile phone. This is plenty for simple tasks, even on models that are a decade old.

The idea is this: when your app or site or other software requests data, the pi-hole will look up the domain name and compare it to a list of known ads-serving places. If the name matches, the pi-hole blocks it and nothing happens. If it’s not on the list, it’s resolved properly and transparently.

Ideally this happens without the user noticing at all. DNS servers are one of the most important parts of the internet, and also one of its weakest points. It’s also a very good place to filter requests for every device on a network, which makes it far more effective than software installed on each device.
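To see what that filtering looks like in practice, here’s a minimal check you can run from any machine on the LAN once a pi-hole is up, assuming the Pi ends up at 192.168.1.2 (substitute your own address) and that doubleclick.net is on the default blocklists, which it usually is:

nslookup doubleclick.net 192.168.1.2
nslookup example.com 192.168.1.2

The blocked name typically comes back as 0.0.0.0, so the connection goes nowhere, while the second query resolves normally.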

I’ve used pi-hole on a Pi B+, a Pi 3, and in a Docker container. The last of those is my current home setup, but here we’ll be discussing pi-hole on an older Pi B+. This guide was inspired by a location that isn’t used daily throughout the year, where little professional work is done, and where the network speed is lower than in more suburban areas or cities.

Device and setup

Make sure that your Pi, no matter the generation, has a decent power supply providing at least 1000mA. Insufficient power will affect the Pi’s performance, which we want to avoid.
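If you ever suspect the supply is marginal, Raspberry Pi OS includes a utility that reports whether the board has seen under-voltage or throttling; once the Pi is set up you can check from the command prompt:

vcgencmd get_throttled

A result of 0x0 means no under-voltage or throttling has been detected; anything else is a hint to find a better power supply.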

Make sure you have an SD card of at least 4GB, and preferably larger. The Pi does store logs, and that can cause issues when space is tight. A faster SD card can help, but isn’t critical.

Download the Raspberry Pi Imager from here and make sure there’s nothing on the SD card you want to keep, since it will be erased. Start the Imager and select the Raspberry Pi OS (other) option, then Raspberry Pi OS Lite (32-bit) from the menu. This installs Pi OS without using space or resources for a graphical interface. We’ll be setting up pi-hole on the command line, and it’s quick and simple to do without a mouse.

After setting up the SD card with Pi OS, assemble the Pi itself and get it set up with a display, a wired network connection, and a keyboard. Start the Pi by plugging it in and wait for it to boot. Go through the setup guide, create a user name and password, and select your language and other localization options. When setup is finished, you’ll be presented with a command prompt.

Note that you may have an issue using the Enter key on some keyboards. Use Tab to move the selection box and Space to select. I only had this problem on a B+ using a keyboard designed for macOS during the Pi OS setup. Subsequent uses of the Enter/Return key worked as expected.

At the command prompt, type the following:

sudo apt install git

This will take a few minutes, depending on your Pi. You’ll be prompted to agree to the install at least once. Git will let you easily download the pi-hole installer repository from GitHub. Following the install, type this in at the command prompt to clone the pi-hole repo:

git clone --depth 1 https://github.com/pi-hole/pi-hole.git Pi-hole

Change directory using this command:

cd "Pi-hole/automated install/" 

Then run the installer with the following command:

sudo ./basic-install.sh 

Follow the setup instructions. This is the basic installer, so it’s the quick and easy configuration for a pi-hole. If you want to make changes later, there are guides for doing so, as all settings can be changed either through the web interface or a configuration file.
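After the installer finishes and you’re back at a command prompt, a quick sanity check is:

pihole status

which should report that the DNS service is running and that blocking is enabled.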

Pay special attention to the final screen that the pi-hole installer shows. It tells you the IP address the Pi has been assigned on your local network, reminds you that it should be set as a reserved address on your router, and gives you the password for the web interface. I recommend using a phone to take a photo of this, at least temporarily.
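If you do misplace that password, it can be reset later at the Pi’s command prompt; the following prompts for a new web interface password (leaving it blank removes the password entirely):

sudo pihole -a -p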

Depending on your network setup, the DHCP reservation may matter more than in others. Some IP address assignments expire quickly, but most modern networks don’t recycle IP addresses for devices too frequently. This can be an issue if the pi-hole is used in an environment with a lot of new clients, like a salon, coffee shop, restaurant, or other public space.

Changing the DNS setup on a router is also pretty simple in most cases, but it can have quirks. In the Ubiquiti Amplifi mesh setup I’ve used, it needed a few extra tweaks to get working right. I recommend setting the pi-hole’s IP address as the Primary DNS server and a known good, trustworthy DNS server as the Secondary DNS. Don’t use your ISP’s DNS IP unless you trust that they’re not using it against you.

The idea behind not sending all traffic, at all times, through your pi-hole is DNS reliability. If the Pi has issues, is overwhelmed, or is otherwise unresponsive to a request, a device can use the Secondary DNS address to resolve it. If only the pi-hole were configured, requests would fail completely whenever the pi-hole doesn’t respond.

Test it by loading this page on a device and seeing if ads show up. Make sure the device doesn’t have a native ad blocker enabled, of course.
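If you want to watch what’s happening while you test, you can tail the pi-hole’s query log live from the Pi’s command prompt (Ctrl-C stops it):

pihole -t

You’ll see each lookup scroll by as devices make them, including which ones were blocked.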

Final Thoughts

A pi-hole is a neat thing to add to your network, especially if you have a device available and the time to set it up. It does take a little skill, but anyone with command line and Linux experience can set it up quickly. If DNS is configured with a fall-back server, there’s very little risk to your network. Setting up a DHCP reservation is also an interesting thing to do if you haven’t done one before.

My hope for this guide is to get you using a device that might otherwise sit unused. The benefits outweigh the time spent, and I enjoy looking at the pi-hole’s admin panel to see which services are blocked and how many times. Looking at network clients to see who requests the most allowed and blocked services will also give you some insight into how many “smart” devices on your network call home frequently.

Resources

https://www.raspberrypi.com/software/

https://pi-hole.net

https://docs.pi-hole.net/ftldns/configfile/

https://privacyinternational.org/guide-step/4341/raspberry-pi-setup-and-run-pi-hole

A need recently came up for a compact, simple device that could connect to a streaming service and play music. The idea is to have something that could weather a power outage or network drop, or just start working when plugged in with power and ethernet. One station, no web interface, no keyboard or mouse. Just power, network, and audio out.

The Raspberry Pi has been around for over a decade, and the standard Pi 1/2/3 series has all of the necessary components to do this. The Zero and Zero W models would require additional hardware to output sound or use a wired connection, but can work.

I chose to try something first on a Pi B+ and later on a Pi 3. The B+ worked really well, especially considering its age. I’ve had this unit since 2012 and it’s still competent at basic tasks. Playing music while running a web server? Easy.

Volumio is a free, easy-to-install operating system package that suits the Pi well. It’s mostly used by audiophiles who pair a Pi with an external sound card or DAC. It can stream local media, cloud sources, and some streaming services. It will also fit on a fairly small SD card, which is nice.

Installation is simple from the Volumio site, but version 3 complicates things. Version 2 supports a plugin that makes this task easier. I’ll include instructions for both. Download the latest version of Volumio from the website, or get the final version of Volumio 2 from here: https://updates.volumio.org/pi/volumio/2.917/volumio-2.917-2021-10-06-pi.img.zip

A utility like Balena Etcher on macOS or Win32DiskImager on Windows can get the OS onto the card in minutes. Once booted, you configure Volumio through a web page hosted on the Pi over the LAN. Usually it will resolve at http://volumio.local, but finding the local IP address is more reliable in my experience. YMMV.
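A quick way to check whether the mDNS name resolves, from a Mac or Linux machine on the same network, is:

ping -c 1 volumio.local

If that answers, the reply shows the Pi’s IP address, which you can use directly in the browser; if not, your router’s DHCP client list is the fallback.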

Once booted, rename the device if you’d like it called something other than the default, and pick your sound output. This setup uses the 3.5mm audio jack on the Pi by default, but testing with two USB sound cards has been successful without changing the output after full setup.

Once you’re booted up and running, let’s go find out how to get a stream loaded.

In Version 2, let’s navigate to Plugins, then select the Miscellanea option under Search Plugins, then scroll down to AutoStart. Install it, then under Plugins, Installed plugins, make sure AutoStart is on.

Navigate on the left main menu to Web Radio. Look for the My Web Radio option but don’t click on it. All the way on the right side look for the three vertical dots and click on it, then select Add WebRadio. Name the new entry, and paste in a URL to a .m3u for your favorite streaming source.

Test playing the stream and make sure it works. If you want to use an external USB sound card or DAC, go into Settings and Playback Options, then make sure the Output device is right. If you change it, save it after.

Test again!

Now, with the audio playing from your stream, try restarting the Pi. Do it nicely the first time, from the web interface, and make sure it comes back up and plays. If it doesn’t, double-check that AutoStart is enabled. If it does, let it keep playing and then pull the power. Mean, yes, but we’re testing here. See if it boots and starts playing again on its own.

Version 3 doesn’t have a plugin like AutoStart, so we have to get a bit hackish. Buckle up!

First, navigate to the address for your Pi and add a /dev to the end. This gets us into the developer menu where we can do fun things. The only setting we need this time around is SSH, so enable it. Don’t close that tab, because we’ll be back for something else including disabling SSH.

We need a neat plugin from GitHub that a user made. Go to the repository URL below and have a peek around to make sure it’s not going to mine dogecoin on your Pi, then launch PuTTY, Terminal, or an equivalent and SSH to your Pi: “ssh -l volumio ipaddress”, with the password “volumio”.

Look to this guide and follow along: https://volumio.github.io/docs/Plugin_System/Plugin_System_Overview 

Or clone and install it manually:

git clone https://github.com/lukeIam/volumio_custom_autoplay
cd volumio_custom_autoplay
volumio plugin install

Follow the above instructions to add a Web Radio stream and start it playing.

Go back to the web dev interface, as in ip/hostname plus /dev, and copy the whole Player State entry. Remove the /dev and go to the Plugins entry on the left side, then Installed Plugins, select Custom Autoplay, then Settings. Make sure the plugin is enabled, paste the copied Player State into the Command box, and click Save. Restart the Pi and see if it works!
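If it doesn’t start playing on its own and SSH is still enabled, one way to poke at it is to log back in and ask Volumio what state it thinks it’s in:

volumio status

The output should be essentially the same player-state information the /dev page shows, so you can tell whether the stream failed to start or the plugin never ran.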

The Pi B+ and Pi 3 work great and are very competent devices for this simple task. Imagine a situation where you may need an audio stream, always the same for a while, but that just works. Coffee Shop, garage, lounge, man cave, conference lounge…you get the idea.

Until the past two weeks I’d only owned two Uninterruptible Power Supplies, aka UPSes, but that’s changing a little bit for a few reasons.

My first was a Cyberpower unit purchased in the very early 2000s from one of the Office stores, aka Office Depot, OfficeMax, or Staples. It was 750 or 1000VA and served me well for a few years, but the battery failed earlier than it should have. I liked the idea of a UPS, not only for its power backup but because it conditioned the incoming power and gave a much better and more consistent output than one could expect from a wall socket.

Less than a year later I purchased an APC BX1000, likely also from another Office store deal, and it worked great for much longer than I expected. APC is the name in UPSes around the world, and I was bummed when the twin 7.5Ah cells died after what was possibly 10 years on the job. At the time, replacement cells cost more than I was comfortable spending, so it went into a closet and sat.

Recently, however, a local recommended a site, Batterysharks, that I’ve now used for very affordable and reliable cells. The 3.5Ah and 7.5Ah cells are usually about $11 each, and they ship from the US, out of Nevada, in a USPS Flat Rate box that costs more than an individual battery. Batterysharks carries a LOT of batteries, but when replacing the twin cells in my BX1000 I just needed a direct replacement.

Being careful to remove the cells, which hadn’t expanded, leaked, or otherwise deformed, I was able to install the new cells with the OE harness that places one battery, inverted, above the other, as this UPS is a tall tower. Other applications vary, but smaller UPSes usually only require one battery and no harness.

The cells I got from Batterysharks did indeed revive the old BX1000, and I still had the weird USB-to-RJ45 communication cable that it came with. I even had an old version of APC’s PowerChute software, which I knew was compatible with this model and it showed green across the board.

Stoked at this revival for under $40, I remembered that a local university frequently sells older UPSes with failed batteries to the public. The cells are usually distended and are always removed prior to sale and recycled. I was able to buy two CS 500s, one ES 500, and an ES 350 APC unit for a total of $10 after some haggling. Good thing, too: when I tested all four of these with the bad cells from the BX1000, only two of them acted properly. A third gave a weak On Battery light, while the other did nothing, not even an arc when plugging it in.

Knowing that I had a working CS 500 and ES 350, at the very least, two more cells were ordered from Batterysharks. They arrived in a week or so and, just as a final test before sending the two previously dead units to the recycler, I popped a 7.5Ah cell into each of them. I guess they were just grumpy before, because now both of them powered on and acted completely normal. Great!

Knowing that I preferred the ES form factor, it being a super-sized power strip instead of a bread loaf, I immediately had a use for both of them. One went to a laptop and PC setup in the basement where I have done some streaming and do a lot of music listening and rating for my channels on SomaFM. The other is now wall mounted in a basement bedroom closet where my NAS, AP/router, and other equipment reside on a shelf. The ES has provisioned holes for vertical and horizontal wall mounting, so a few measurements later I had it all up and ready with the ES 500. I’ll be adding the communication cable to this soon and letting my NAS monitor its status and get alerts for power failures and shutdowns.
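For the monitoring piece, here’s a minimal sketch of the relevant settings, assuming the NAS’s UPS support uses apcupsd (unRAID’s built-in support does) and the ES connects over its USB data cable; on most Linux setups the file is /etc/apcupsd/apcupsd.conf:

UPSCABLE usb
UPSTYPE usb
DEVICE
BATTERYLEVEL 10
MINUTES 5

The first three lines tell apcupsd to talk to a USB-connected APC unit; the last two are example thresholds that trigger a shutdown when the charge drops below 10% or the estimated runtime falls under 5 minutes.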

In all, the experience with all five of these APC UPSes has been positive. The connectors are robust, and the fit is very good with the SigmasTek cells from Batterysharks.

Update – 03/27/21

I was woken out of sleep last night by the beeping of a UPS. My ES 500 APC unit, powering a NAS and AP, was doing that intermittent beeping these units do to inform you that something’s wrong. No power outage had occurred. I turned it off, then back on, and it started beeping more rapidly, acting like the battery was bad. I popped the cell into both of my CS 500s, which had been unused so far, and they lit up like Christmas trees also. So I swapped the ES 500 for the ES 350 to get things back up and working, while leaving the other cell in a CS 500. That unit had been making the mid-range hum that sounds like it’s charging the cell, so I figured I’d let it. Lo and behold, after about 5 minutes it turned on and is acting normal, and it sounds like it’s still charging. Perhaps the ES failed because its charging circuitry has issues. I’d rather the UPS fail, in this case, than the cell.