Compressed Data Fields

[expired user #5015] posted 13 years ago in Feature discussion
I know it's a long shot, but I'd love to see support for compressed data fields in HeidiSQL.

The databases I'm working with store quite a bit of binary data in compressed format. I'm using an application that uses Delphi on the front end, and PHP on the back end. Fortunately, the compression libraries built into Delphi and the gzcompress functions in PHP are compatible, so no matter which language is accessing the data, the compressed data can be handled properly. So we store a lot of binary data in the database in compressed format to save disk space and improve data transfer times.

I know this wouldn't be a feature that would appeal to a wide audience, but it would be easy to implement: a feature to upload and download compressed data in BLOB fields, and, optionally, to view images that have been compressed.

The compressed data can be recognized by its 0x78 0xDA header (the two-byte zlib header produced at maximum compression level).
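As an aside, 0x78DA is only one of the possible zlib headers; other compression levels produce 0x7801, 0x785E or 0x789C, so a robust check should accept all four. A minimal detection sketch (in Python here purely for illustration, since the check is language-agnostic; the function name is my own):

```python
import zlib

# The four valid two-byte zlib headers (CMF/FLG). The FLG byte encodes a
# compression-level hint; 0x78DA corresponds to maximum compression.
ZLIB_HEADERS = {b"\x78\x01", b"\x78\x5e", b"\x78\x9c", b"\x78\xda"}

def looks_like_zlib(data: bytes) -> bool:
    """Heuristic check: does the blob start with a valid zlib header?"""
    return len(data) >= 2 and data[:2] in ZLIB_HEADERS

blob = zlib.compress(b"some binary payload" * 50, 9)  # level 9 = clMax
print(blob[:2].hex())         # 78da
print(looks_like_zlib(blob))  # True
```

Note this is a heuristic: two arbitrary bytes can collide with a zlib header, so a tool would still want to handle decompression failure gracefully.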

The classes used in Delphi are TCompressionStream and TDecompressionStream, both provided by the ZLib unit included with Delphi. The code I'm using to compress and decompress data stored in a string is:

function Compress(const StrIn: string; CompLevel: TCompressionLevel = clMax): string;
var
  cs: TCompressionStream;
  ss2: TStringStream;
begin
  Result := '';
  if StrIn = '' then
    Exit; // avoid referencing StrIn[1] on an empty string
  ss2 := TStringStream.Create('');
  try
    cs := TCompressionStream.Create(CompLevel, ss2);
    try
      cs.Write(StrIn[1], Length(StrIn));
    finally
      cs.Free; // freeing flushes the remaining compressed data into ss2
    end;
    Result := ss2.DataString;
  finally
    ss2.Free;
  end;
end;

function Decompress(const StrIn: string): string;
const
  BufSize = 65536;
var
  dcs: TDecompressionStream;
  ss1: TStringStream;
  br: Integer;
  buf: string;
begin
  Result := '';
  if StrIn = '' then
    Exit;
  ss1 := TStringStream.Create(StrIn);
  try
    dcs := TDecompressionStream.Create(ss1);
    try
      SetLength(buf, BufSize);
      repeat
        br := dcs.Read(buf[1], BufSize);
        Result := Result + Copy(buf, 1, br);
      until br < BufSize;
    finally
      dcs.Free;
    end;
  finally
    ss1.Free;
  end;
end;

Thanks for listening!
ansgar posted 13 years ago
Did you know MySQL also has some built-in zlib (de)compression? This works on my win32/5.5.8 server:
SELECT COMPRESS(REPEAT('a',1000));
SELECT UNCOMPRESS(COMPRESS(REPEAT('b',1000)));
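One caveat worth noting here: MySQL's COMPRESS() output is not byte-compatible with PHP's gzcompress() or Delphi's TCompressionStream, because MySQL prepends the uncompressed length as a 4-byte little-endian integer before the zlib stream. A sketch of that framing (Python for illustration; the function names are my own):

```python
import struct
import zlib

def mysql_compress(data: bytes) -> bytes:
    """Mimic MySQL COMPRESS(): a 4-byte little-endian uncompressed
    length, followed by a plain zlib stream."""
    if not data:
        return b""  # COMPRESS('') returns the empty string
    return struct.pack("<I", len(data)) + zlib.compress(data)

def mysql_uncompress(blob: bytes) -> bytes:
    """Mimic MySQL UNCOMPRESS() for blobs produced as above."""
    if not blob:
        return b""
    expected_len = struct.unpack("<I", blob[:4])[0]
    out = zlib.decompress(blob[4:])
    if len(out) != expected_len:
        raise ValueError("length prefix does not match payload")
    return out

print(mysql_uncompress(mysql_compress(b"a" * 1000)) == b"a" * 1000)  # True
```

So a blob written with gzcompress() would need that 4-byte prefix added before UNCOMPRESS() could read it, and vice versa.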
[expired user #5015] posted 13 years ago
Unfortunately, this defeats half of the purpose of using compressed storage: reducing the amount of data transferred over the wire. And if I'm using a compressed connection, the data gets decompressed by the query, then recompressed over the connection, seriously slowing things down.

It would be nice if HeidiSQL had support for ZLib compression built right in.
kalvaro posted 13 years ago
If you want to reduce the amount of data transferred over the wire, you can enable MySQL's native compressed network protocol. In HeidiSQL, it's the "Compressed client/server protocol" checkbox in Session Manager.
[expired user #5015] posted 13 years ago
We already have compressed connections enabled. That doesn't do anything for data storage on the server (if we were to store data uncompressed, we'd have to double our storage, which, being on SSDs, would be prohibitively expensive). And it slows the server down to deal with all of that compression/decompression.

If we store the data compressed in the database and decompress it using the MySQL UNCOMPRESS function, the server would re-compress it to send it to the client because client compression is turned on. These decompress/compress cycles place a HUGE load on the CPU of the server. And it's unnecessary.

If we store it uncompressed the CPU of the server has to compress it every time someone runs a query against it in order to send it to a client that has connection compression enabled.

If it's stored compressed in the database, it doesn't have to be re-compressed to be sent to a client, because it can't be... it's already as small as it can be.

It makes a lot more sense to store the data compressed in the database in the first place and let the client compress/decompress it. It's much more efficient in terms of both CPU load and storage to handle compression at the client. It's really the best way to handle large amounts of binary data.
ansgar posted 13 years ago
Storing compressed data also seems the most complex way, as every client, including webservers + PHP + whatever, has to do the same (de)compression. CPU load is not optimized away; it's just moved to the clients and webservers. In the case of images, I'm always for storing JPGs, which are decompressed in the browser. Well, all in all, that's not an argument against some compression feature in Heidi. But it's also totally unclear how that should work in terms of Heidi's data grids and blob editor.
[expired user #5015] posted 13 years ago
Decompressing at the client is exactly what we want and are doing now. We don't want the extra load placed on the CPU of the server (it's busy enough as it is). Since my company is in total control of every aspect of how the data is used, it's no problem whatsoever to have PHP, MySQL, a Delphi client, or whatever, process compressed data. We've found it's actually much faster to have the client do the decompression, since the data is automatically compressed efficiently in transit. The savings in disk space are also nice.

Anyway, what I had envisioned for HeidiSQL is something similar to what it offers now for BLOB fields: a way to upload and download data, with the context menu for a BLOB field offering 'Compress and Upload File to Blob' and 'Download and Decompress to File' (when the data starts with 0x78DA) commands.

On a side note, am I missing a command to upload binary data directly into a BLOB field in a table? I thought Heidi had one. I see an Insert Files command to do it en masse, but nothing to upload a file into a single column (sort of the opposite of the Save Blob to File command).
