Out of sort memory, consider increasing server sort buffer size
When trying to edit a data object relation directly in the table in Backendless, I received the above message.
Steps to reproduce
Please describe steps to reproduce starting from the first action.
- Go to the table
- Try to edit a field with an object relation to another table
- A modal window opens where you can edit the relation, but the error shows immediately
I hope you have an idea.
Sorry for the inconvenience. Could you please specify the table name? I have tried several and was not able to reproduce the issue.
It is Watchlist - please try to add a share relation
I’ve successfully reproduced the issue you reported. After thorough investigation, I’ve identified two key factors contributing to the problem:
JSON Column Size Limit:
It appears that the JSON columns in your table need to be adjusted. Currently, the size of each record is approaching the threshold, and I recommend reducing the size of the JSON columns in the StockMRQEvaluationData table. Ideally, aim for a maximum of 64 kB per record to ensure optimal performance. In the future, we will block applications that exceed the limit.
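To check whether a record would stay under that guideline, one option is to measure the serialized payload on the client before saving it. Backendless itself does not expose a size API, so this is only a rough client-side sketch, and the field names in the sample record are invented for illustration:

```python
import json

# The 64 kB per-record guideline mentioned above.
MAX_RECORD_BYTES = 64 * 1024

def payload_size(record: dict) -> int:
    """Approximate a record's stored size as the byte length
    of its JSON serialization."""
    return len(json.dumps(record).encode("utf-8"))

# Hypothetical record shaped like a row of StockMRQEvaluationData.
record = {"ticker": "ACME", "evaluationJson": {"revenue": [1.0] * 8}}

size = payload_size(record)
print(size, "bytes, within limit:", size <= MAX_RECORD_BYTES)
```

The measurement is approximate (the server's internal storage format may differ), but it is usually close enough to flag records that are near the 64 kB threshold before they are saved.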
Indexing for Sorting:
The issue you’re encountering is related to the sorting operation on the created column. To improve sorting efficiency, it’s crucial to add an index on the created column in the StockMRQEvaluationData table. This index will significantly improve query performance and resolve the sorting-related error.
By addressing these two aspects, you should observe a noticeable improvement in the system’s behavior and resolve the reported issue effectively.
Thank you for the effort to investigate the problem. The indexing did the trick.
But regarding the 64 kB threshold and the potential blocking of my application: this would mean a redesign of a major part of the application and the front end. For that reason it would be helpful to identify the most critical columns.
How can I see the consumed size per record?
Unfortunately, there is no public tool for measuring the consumed size of a record or a column.
There is a workaround to measure a record's size. Here is what you can do:
In your terminal, run the following command:
curl https://eu-api.backendless.com/<appId>/<apiKey>/data/StockMRQEvaluationData\?offset\=1\&pageSize\=1\&relationsDepth\=0 -i > test.json
(make sure to replace <appId> and <apiKey> with your own values)
And then run:
ls -lah test.json
You will see something like this:
-rw-r--r-- 1 ksv510 staff 203K Feb 14 12:23 test.json
203K is the size of the file with a record inside it.
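To narrow that total down to individual columns, you can also parse the downloaded record and rank its fields by serialized size. A minimal sketch, assuming the response was saved without curl's -i flag so the file contains only the JSON body (an array with one object); the sample record here is invented for illustration:

```python
import json

def column_sizes(record: dict):
    """Rank a record's columns by the byte length of each
    JSON-serialized value, largest first."""
    sizes = {col: len(json.dumps(val)) for col, val in record.items()}
    return sorted(sizes.items(), key=lambda kv: -kv[1])

# In practice, load the record from the saved response, e.g.:
#   record = json.load(open("test.json"))[0]
# Hypothetical sample record for illustration:
sample = {"objectId": "A1", "evaluationJson": {"q1": [1] * 50}, "created": 1700000000}

for column, size in column_sizes(sample):
    print(column, size)
```

The largest entries in the output are the columns most worth trimming to get under the 64 kB guideline.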
Hope this is helpful.