How to Reduce SD Card Firmware Image Download Size

So I’ve just received a Roseapple Pi board, and I finally managed to download the Debian and Android images from the Roseapple Pi download page. It took me nearly 24 hours to succeed, as the Debian 8.1 image is nearly 2GB in size and neither the Google Drive nor the Baidu download links were reliable, so I had to try a few times, and after several failed attempts it finally worked (mornings are usually better).

One way to improve the situation is to use better servers such as Mega, at least in my experience, but another way to reduce download time, and possibly bandwidth costs, is to provide a smaller image: in this case not a minimal image, but an image with the exact same files and functionality, simply optimized for compression.

I followed three main steps to reduce the firmware size from 2GB to 1.5GB on a computer running Ubuntu 14.04, but other Linux operating systems should also work:

  1. Fill unused space with zeros using sfill (or fstrim)
  2. Remove unallocated and unused space in the SD card image
  3. Use the best compression algorithm possible. The Roseapple Pi image was compressed with bzip2, but LZMA tools like 7z usually offer a better compression ratio

This can be applied to any firmware image, and the sfill step is usually the most important part.

Let’s install the required tools first:
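On Ubuntu/Debian the packages below should provide everything needed, assuming the usual package names (secure-delete provides sfill, p7zip-full provides 7z, and gdisk handles GPT images):

  sudo apt-get install secure-delete p7zip-full gdisk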


We’ll now check the current firmware file size, and uncompress it:
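For example, with a hypothetical file name (the actual Roseapple Pi image name will differ):

  ls -lh roseapple_debian_8.1.img.bz2
  bunzip2 -k roseapple_debian_8.1.img.bz2

The -k option keeps the original .bz2 file around so its size can be compared later.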


Good, so the uncompressed firmware image is 7.4GB. Since it’s an SD card image, you can check the partitions with fdisk:
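Something like this, still with the hypothetical file name:

  fdisk -l roseapple_debian_8.1.img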


Normally fdisk will show the different partitions, with a start offset which you can use to mount a loop device and run sfill. But this image is a little different, as it uses GPT. fdisk recommends using the gparted graphical tool, but I’ve found that gdisk is also an option.
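gdisk can list GPT partitions non-interactively, for example:

  gdisk -l roseapple_debian_8.1.img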


That’s great. There are two small partitions in the image, and a larger 6.9 GB partition starting at sector 139264. I mounted it, and filled the unused space with zeros once as follows:
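A sketch of those steps, with the byte offset computed as 139264 sectors * 512 bytes, and /mnt used as an arbitrary mount point (the sfill flags request a single pass writing zeros to free space only):

  sudo mount -o loop,offset=71303168 roseapple_debian_8.1.img /mnt    # 139264 * 512
  sudo sfill -l -l -z /mnt
  sudo umount /mnt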


The same procedure could be repeated on the other partitions, but since they are small, the gains would be minimal. Time to compress the firmware with 7z, using the same options I used to compress a Raspberry Pi minimal image:
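A typical 7z command line with LZMA and maximum compression looks like this; the exact options may differ slightly from the ones in the Raspberry Pi post:

  7z a -t7z -m0=lzma -mx=9 -mfb=64 -md=32m -ms=on roseapple_debian_8.1.img.7z roseapple_debian_8.1.img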


After about 20 minutes, the result is that it saved about 500 MB.
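A quick size check of both archives (hypothetical file names again):

  ls -lh roseapple_debian_8.1.img.bz2 roseapple_debian_8.1.img.7z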


Now if we run gparted, we’ll find 328.02 MB of unallocated space at the end of the SD card image.
[Screenshot: gparted showing the partitions of the Debian firmware image]

Some more simple maths… The end sector of the EXT-4 partition is 14680030, which means the actual useful size is (14680030 * 512) 7516175360 bytes, but the SD card image is 7860125696 bytes long. Let’s cut the fat further, and compress the image again.
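Truncating the image to the size computed above and recompressing it can be done along these lines (the new archive name is arbitrary, just to keep both files around for comparison):

  truncate -s 7516175360 roseapple_debian_8.1.img
  7z a -t7z -m0=lzma -mx=9 -mfb=64 -md=32m -ms=on roseapple_debian_8.1_truncated.img.7z roseapple_debian_8.1.img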


and now let’s see the difference:
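For example, with the hypothetical file names used above:

  ls -l roseapple_debian_8.1.img.7z roseapple_debian_8.1_truncated.img.7z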


Right… the file is indeed smaller, but it only saved a whopping 82,873 bytes, which is not really worth it, and means the unallocated space in that SD card image must have already been filled with lots of zeros or other identical bytes.

There are also other tricks to decrease the size, such as clearing the cache, running apt-get autoremove, and so on, but these are system specific, and do remove some existing files.

26 Comments
zoobab
9 years ago

Those Chinese manufacturers should get a VPS somewhere and host those files with a simple lighttpd (or even better, FTP and rsync). I had the same problem with BananaPi and other Chinese vendors, they just don’t get it. Baidu Pan and Google Drive are free hosting, but they are not really friendly when it comes to command line downloads.

onebir
9 years ago

Does the efficiency of the compression algo matter much once the empty space is all zeros? Surely it’s just a matter of representing one million zeros as “zero a million times” rather than 00000….

So it seems like sfilling empty space with zeros ought to offer similar space savings, whatever the algo.

(People may want to offer images using common but less efficient compression like zip; it would be a shame if they failed to use CNX’s method because they thought they’d HAVE to use 7zip…)

zoobab
9 years ago

Can you try squashfs?

zoobab
9 years ago

@zoobab
You are right, the Google Drive download gets stuck at 880MB here. Any idea how to use wget -c, and with which URL?

zoobab
9 years ago


It seems to request a captcha, which is ugly. I even have problems reading the captcha properly.

pm7
9 years ago

You should check out lrzip. It can compress some types of files very well, at the cost of very high RAM usage during compression.

Andrew
9 years ago

They should distribute by torrent. This is what torrent is really excellent for.

I’m also surprised that the install is 7GB … that must be with every friggin feature enabled!

wget
9 years ago

wget
9 years ago

Once again, before no-check-certificate there must be two hyphens (--).
Something is wrong with this comment system because it always removes one.

notzed
9 years ago

bittorrent would solve the bandwidth / reliability problem somewhat.

Fabry
9 years ago

One solution is also to use LZ4. It can compress at very high speed (on SSD even at 400MB/s), and the receiving user can decompress the archive file even faster (the bottleneck will only be disk speed). The compression ratio is much worse than LZMA, but you can compress an 8GB file in less than 30 seconds with LZ4 on a machine with an SSD on SATA2 (faster with SATA3), where LZMA needs about 20 minutes. The bottleneck for LZ4 is disk speed, and for LZMA CPU/RAM speed. And if you need a higher compression ratio, you can recompress the LZ4 archive with LZMA. This gives you… Read more »

Peter
9 years ago

Instead of using sfill or fstrim or whatever, a dd-ed file filled with zeroes does the same thing.

Peter
9 years ago


Maybe I missed something, but I assume the actual data is needed, and it stays after sfill or fstrim.
My dd usage is: mount the partition, then write a big file with zeros inside. After dd dies because there is no free space anymore, this big file is deleted.

But like I wrote, maybe we are talking about two different things?
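For reference, the trick described above boils down to something like this, with a hypothetical mount point:

  dd if=/dev/zero of=/mnt/zero.bin bs=1M    # runs until the filesystem is full
  sync
  rm /mnt/zero.bin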

Peter
9 years ago

@Jean-Luc Aufranc (CNXSoft) But then we ARE talking about the same thing. Let me make a practical example. I make an SD card and run it, and make a few customizations on it. Because of this, all the space was used for some temporary files. After I’m done, I delete all those files. The files are gone, but the sectors on the card are still occupied with the old data. And if I make an image from the card with the dd command and compress it, all those unused bytes are still used in the image. But if I make a big file with zeroes inside after I removed all the unneeded files and… Read more »

Sander
9 years ago

I agree with @notzed: use bittorrent, as it’s resilient against lossy connections.

Furthermore: I really wonder why Raspi / Raspbian does not provide a 300-500MB base image.

Cardy Hamder
9 years ago

Fabry: [quoting the LZ4 comment above] Combining lz4 with lzma, this is a *great* idea (although lz4 -9 is enough for… Read more »

Igor
9 years ago

All Armbian images have been built as minimum-size SD card images by default for some time. If someone is interested in the method used: https://github.com/igorpecovnik/lib/blob/second/common.sh#L254
