Hello
I get the message "mysql client ran out of memory" when executing a query on a local computer with 16 GB of memory (Win7) and HeidiSQL v16. Does the "client" here refer to HeidiSQL? Can the memory be extended? I will need to run big data sets in the future. It stops working when the query result exceeds about 60.000 rows and 4400 columns.
Any answer with a direction would be appreciated.
Thanks in advance
Thanks for the fast answer.
Can I use the --quick, -q option with HeidiSQL? I can't get the query past with this --quick option in the Query tab. Thanks in advance.
I'm trying to generate an analytic data set that has to be organized in this format; that's why there are 4400 columns.
According to http://dev.mysql.com/doc/refman/5.0/en/out-of-memory.html it is a problem on the client side, and with the --quick option the result is streamed row by row.
Thanks in advance
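For reference, this is roughly what the command-line equivalent would look like (host name, user, database, and table below are placeholders, not from the original post):

```shell
# --quick makes the mysql client fetch rows one at a time
# (mysql_use_result) instead of buffering the whole result
# set in client memory. Placeholders: adjust host/user/db.
mysql --quick -h localhost -u myuser -p mydb \
  -e "SELECT * FROM mytable" > result.tsv
```

Note that --quick is an option of the mysql command-line client, which is why it is not accepted as SQL text in HeidiSQL's Query tab.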

Can I change options in the ODBC configuration between HeidiSQL and MySQL? The author of the post below added Option=1048576 to the ODBC config.
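For illustration, in a Connector/ODBC DSN definition that flag would be set like this (DSN name, server, and credentials are placeholder values; as far as I can tell, 1048576 is Connector/ODBC's "don't cache results of forward-only cursors" flag, which streams rows instead of buffering them client-side):

```ini
[MyDSN]
Driver   = MySQL ODBC Driver
Server   = localhost
Database = mydb
User     = myuser
Password = secret
; 1048576: stream results instead of caching them on the client
Option   = 1048576
```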
I found this: http://www.brettjohnspence.com/2009/12/mysql-client-ran-out-of-memory/
More applications have problems with big data and this error; SAS has implemented the same kind of solution for it: http://support.sas.com/kb/31/117.html
If someone runs into this out-of-memory problem, you will have to start mysql from the command line and save the query result into a file with an 'INTO OUTFILE' clause attached to the SELECT statement. Hopefully HeidiSQL will be able to handle larger data sets in the future.
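A sketch of that workaround (table name and path are placeholders; note the file is written on the server's filesystem, and the MySQL user needs the FILE privilege):

```sql
SELECT *
FROM mytable
INTO OUTFILE '/tmp/result.csv'
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  LINES TERMINATED BY '\n';
```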
Thank you to the creator for this nice application.

Yes, the SELECT ... INTO OUTFILE ... syntax is a good choice. Alternatively, you can use mysqldump -hYourHost -uUser -pPass yourdatabase yourtable to create a dump file including the structure.
HeidiSQL, as a GUI app, will never be able to handle as much data as the command line clients can, as there is quite a bit of overhead in a GUI with such features. But I'm always minimizing memory usage in HeidiSQL where I can, and computers nowadays with a myriad of gigabytes of RAM can handle quite large results.
I tried to open the generated file in Excel, and only 50.000 rows (4400 columns) of the 75.000 could be opened; Excel complained about resources. But that was a 32-bit version of Excel on 64-bit Win7, so I tried on a workstation with 24 GB RAM and 64-bit Excel, because I thought it was connected to the 4 GB RAM boundary of 32-bit Windows systems. However, I still got the same message, "Not enough resources to open the file", so now I'm confused. The CSV file is 0.8 GB in size.
So it's not a HeidiSQL issue.