Total number of bytes used per record?
When creating a new table and assigning the various data types and lengths, is there a way (for optimisation purposes) to tell the total space, in bytes, required for each record? I hope that makes sense... I'm new to this!
OK, this is how it came about...
I was creating a new table with about 20 columns of all different data types, which I was trying to optimise with regard to the storage size of each record. It's the first time I have used a database, so I am not totally clear on the exact storage size of each data type. So I thought it would be useful (to me at least) to have an indication of the total storage required for the records in the table (as you add new columns). It would help make sure the records weren't taking up more space than they need to.
Is it clear now what I mean?
This must be the first time I've heard database design discussed in such terms... What are you trying to accomplish? Storing a string like "Hello" serialized as an integer if that saves a byte?
If you could provide a real life example of those optimizations you have in mind, we could probably help you point out the precise misconception.
for what it's worth, 11.5. Data Type Storage Requirements contains details.
however, i must side with kalvaro; size-optimizing db record structures by hand in the age of terabyte drives seems to me a strangely retro kind of hobby.
one alternative ad hoc strategy that i have used myself would be:
1) make fields quite large;
2) wait until table contains a sizeable amount of data;
3) statistically analyze field usage, and optionally shrink some fields accordingly.
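for example, once the table has some real data in it, a query like this (the schema and table names are just placeholders) gives a rough average row size straight from mysql:

SELECT TABLE_NAME,
       TABLE_ROWS,
       AVG_ROW_LENGTH,   -- average bytes per row, including row overhead
       DATA_LENGTH       -- total bytes currently used by the table data
FROM   information_schema.TABLES
WHERE  TABLE_SCHEMA = 'mydb'
  AND  TABLE_NAME   = 'mytable';

SHOW TABLE STATUS LIKE 'mytable' reports the same figures.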
Ok, it's beginning to make sense why you guys think my question is so strange. I come from a world (programming microcontrollers) where memory is limited and optimisation is important. My mindset is 'optimise where you can' and 'why use more than you need to'. The records (which contain a mixture of datetime, char, varchar, int and float columns) are to hold data from a remote monitoring system I have developed and hope will grow to be widely used. Each individual system can have 12 sensors, each of which could be recording 24 records per day, so it could (hopefully) end up being a very large amount of data. My thinking is to get it right now rather than have to do it again when I find out my database is taking up a lot of hard drive space on a web server.
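To make it more concrete, here is a rough sketch of the kind of table I have in mind, with the storage size from the manual noted against each column (the names are just placeholders, not my real schema, and the exact sizes depend on the MySQL version and character set):

CREATE TABLE sensor_reading (
  reading_id   INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- 4 bytes
  sensor_id    SMALLINT UNSIGNED NOT NULL,            -- 2 bytes
  recorded_at  DATETIME NOT NULL,                     -- 8 bytes
  status_code  CHAR(1) NOT NULL,                      -- 1 byte with a single-byte charset
  label        VARCHAR(32) NULL,                      -- actual length + 1 length byte
  reading      FLOAT NOT NULL,                        -- 4 bytes
  PRIMARY KEY (reading_id)
) ENGINE=InnoDB;
-- roughly 4 + 2 + 8 + 1 + (up to 33) + 4 bytes of column data per row,
-- before InnoDB's own per-row and per-page overhead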
Large time based datasets are good candidates for partitioning:
http://dev.mysql.com/doc/refman//5.5/en/partitioning.html
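A range partition on the timestamp column could look roughly like this (the table and column names are only illustrative):

CREATE TABLE sensor_reading (
  sensor_id   SMALLINT UNSIGNED NOT NULL,
  recorded_at DATETIME NOT NULL,
  reading     FLOAT NOT NULL
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(recorded_at)) (
  PARTITION p2012 VALUES LESS THAN (TO_DAYS('2013-01-01')),
  PARTITION p2013 VALUES LESS THAN (TO_DAYS('2014-01-01')),
  PARTITION pmax  VALUES LESS THAN MAXVALUE
);
-- old partitions can later be dropped or archived cheaply with
-- ALTER TABLE ... DROP PARTITION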
I've noticed that if you want the size of a certain result set, you can export the grid. The Export dialog will tell you the size of the result, either for the current Selection or for the Complete result set. I've attached a screenshot to this post to illustrate; see the Row Selection section.