I had a situation with very large but highly compressible files: test vectors for a piece of hardware, mostly repeated data. As the traces have grown towards 1 TB, reading and writing have become painfully slow. What I needed was lightweight, transparent compression that works on a single directory.
Here is one answer that seems to work: use a loopback device to create a small BTRFS filesystem mounted with compression. BTRFS's LZO compression works very well here: the data fits in a fraction of the space, and reading and writing are much faster.
Here's how to do it.
Create a backing store for the volume. Here it's sized at 50 GB:
fallocate -l 50G BTRFS.BACKING
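If fallocate isn't supported on the filesystem holding the backing file, a sparse file created with truncate should work just as well; it only consumes real space as the volume fills:

truncate -s 50G BTRFS.BACKING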
Make it a loopback device:
losetup /dev/loop0 BTRFS.BACKING
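If /dev/loop0 is already taken, losetup can find the first free loop device for you and print the name it picked:

losetup -f --show BTRFS.BACKING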
Create a BTRFS filesystem on it:
mkfs.btrfs /dev/loop0
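To confirm the filesystem was created, ask btrfs to show it:

btrfs filesystem show /dev/loop0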
Mount the filesystem, specifying compression:
mount -o compress=lzo /dev/loop0 /mnt
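To have this come back after a reboot, a single /etc/fstab entry can handle both the loop setup and the mount. A sketch, assuming the backing file lives at /data/BTRFS.BACKING (adjust the path to wherever you created it):

/data/BTRFS.BACKING  /mnt  btrfs  loop,compress=lzo  0  0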
That's it. The filesystem is ready to fill up with redundant data.
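To see how much the compression is actually saving, the compsize tool (packaged as btrfs-compsize on some distributions) reports compressed versus uncompressed usage for a path:

compsize /mnt

When you're done, unmount and detach the loop device:

umount /mnt
losetup -d /dev/loop0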