Internal Server Error when saving data objects with TEXT columns

I am currently receiving the following error message when saving data to the table that has TEXT columns: Internal server error with id 08294239-C40F-282B-FF11-9C79E1951C00.

The test data sets each of the 10 TEXT columns to the maximum data length of 21,000 characters. The length of the formatted request is 213,153 bytes.

The table contains 10 TEXT columns, 12 INT columns, 14 STRING columns, 1 STRING_ID column, and 2 DATETIME columns.
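
For context, here is a minimal sketch of the kind of save request that fails. This is my reconstruction, not the actual app code: the endpoint format is my reading of the Backendless REST data service, and the credentials and column names are placeholders.

```python
import requests

APP_ID = "YOUR-APP-ID"          # placeholder credentials
REST_KEY = "YOUR-REST-API-KEY"
TABLE = "BigTable"              # hypothetical table name

# Test object: 10 TEXT columns, each filled to the 21,000-character maximum.
# (The real table also has the INT, STRING, STRING_ID and DATETIME columns.)
obj = {f"text{i:02d}": "x" * 21000 for i in range(1, 11)}

resp = requests.post(
    f"https://api.backendless.com/{APP_ID}/{REST_KEY}/data/{TABLE}",
    json=obj,
)
print(resp.status_code, resp.text)  # comes back with the internal server error described above
```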


In another table, I have only 10 TEXT columns, and my test data also sets all 10 TEXT columns to the maximum data length of 21,000. In this setup, saving the data works.

Hi Roy! We will add an informative error message for this situation. In the log file I saw a “Row size too large” database exception.
There are several ways to fix this issue:

  1. Try to normalize your data table: move part of this data into another table (or tables), as in the sketch after this list.
  2. Try to change STRING columns to TEXT.
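
To illustrate option 1, a rough sketch of one way the normalization could look: keep the small columns in a parent table and move each large TEXT value into its own row of a child table that references the parent. All table and column names here (Document, DocumentText, docId, and so on) are hypothetical.

```python
import requests

APP_ID = "YOUR-APP-ID"          # placeholder credentials
REST_KEY = "YOUR-REST-API-KEY"
BASE = f"https://api.backendless.com/{APP_ID}/{REST_KEY}/data"

def save_normalized(core_fields, text_fields):
    """Save the small columns as a parent row, then store each large TEXT
    value as its own child row so no single row hits the row size limit."""
    parent = requests.post(f"{BASE}/Document", json=core_fields)
    parent.raise_for_status()
    parent_id = parent.json()["objectId"]  # objectId is assigned on save
    for name, value in text_fields.items():
        child = {"docId": parent_id, "name": name, "body": value}
        requests.post(f"{BASE}/DocumentText", json=child).raise_for_status()
    return parent_id

# Example: small fields stay in Document, the 10 big texts become child rows.
save_normalized({"title": "report"}, {f"text{i}": "x" * 21000 for i in range(10)})
```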

Regards,
Kate.

Kate,

I did change all STRING columns to TEXT. If possible, I really don’t want to have 10 to 20 separate tables to handle this data.

Do you still receive errors when saving data after these changes?

The error saving data occurs when I change the column types from STRING to TEXT.

Steps I’ve performed:

  1. Deleted the table and re-created it, changing all STRING columns to TEXT - still fails.
  2. Drastically shortened the data length from 21,000 to 1,000 - still fails.

Hi Roy,

You said you changed the STRING columns to TEXT after you recreated the table. Do you create the table by hand, or is it created as a result of saving an object there?

Regards,
Mark

I create the table by saving an object and then I manually change the column type from STRING to TEXT.

The problem right now is that a single object becomes too large. You need to normalize your data table and make the schema smaller.

I guess I’m out of options and will see about normalization.

Could a larger data type be introduced, so that the available types would be STRING, TEXT, and LARGE_TEXT (100KB)?

For large blocks of data we recommend using file storage. Every file has a URL, which can be stored as an object property.
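
A sketch of that approach: upload the large text through the File Service and keep only the URL in the data object. The upload path and the fileURL response field are my reading of the REST file upload API, so double-check them against the File Service docs.

```python
import requests

APP_ID = "YOUR-APP-ID"          # placeholder credentials
REST_KEY = "YOUR-REST-API-KEY"
BASE = f"https://api.backendless.com/{APP_ID}/{REST_KEY}"

# Upload a ~210KB text blob as a file (multipart form upload).
large_text = "x" * 210000
upload = requests.post(
    f"{BASE}/files/documents/doc1.txt",
    files={"file": ("doc1.txt", large_text)},
)
upload.raise_for_status()
file_url = upload.json()["fileURL"]  # assumed name of the returned URL field

# The data object stores only the short URL, far under the row size limit.
requests.post(f"{BASE}/data/Document", json={"bodyUrl": file_url})
```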

What kind of data are you trying to store? I sense you’re somewhat reluctant to use file storage. Any particular reason for that?

I don’t have any particular objection to using file storage. I plan on using the file storage service and am eagerly looking forward to the additional API for listing files in directories.

One possible reason I’m hesitating is that the database is maintained by Backendless, which potentially provides quicker response times, whereas I assume file service data is captured by your servers and routed to AWS, which adds another dependency?

All of Backendless is hosted on AWS, however, we do not use any specialized AWS file storage systems. It is a distributed file system maintained by us (running on the EC2 servers).

Thanks for the help and info.

Question: if I wanted to have additional tables, each holding a smaller number of TEXT columns, is there a suggested maximum number of TEXT columns that I should plan on having per table?

Maybe one table would have 3 TEXT columns… so in total, I would need 4 separate tables to hold the 10 TEXT columns I had originally planned on. Is there a maximum request byte count or something similar I can base my database design on?

The size of your own data (that is, excluding our system fields, like objectId and so on) should not exceed 8037 bytes. A TEXT column takes no more than 788 bytes (even if you store 21,000 symbols); in other words, only the first 788 bytes are counted toward the limit.
This means you can have at most 10 TEXT columns of 21,000 symbols in one table, and then you’re left with 157 free bytes (so you can store one more 157-symbol STRING or TEXT).
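
To sanity-check that arithmetic (only the TEXT cost is given above, so the other column types aren’t modeled here):

```python
# Row budget per Sergey's numbers: 8037 bytes of user data per row,
# with each TEXT column counting at most 788 bytes toward it.
ROW_BUDGET = 8037
TEXT_COST = 788

used = 10 * TEXT_COST          # 7880 bytes for 10 TEXT columns
remaining = ROW_BUDGET - used  # 157 bytes left for one short STRING/TEXT
print(used, remaining)         # -> 7880 157
```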

Yes, I did change all columns to TEXT, so this resulted in a table having 24 TEXT columns, 12 INT columns, 1 STRING_ID column, and 2 DATETIME columns.

I also got the same internal server error when saving to this table.

We will make a change to make the error message more informative, but as Sergey wrote, you cannot have more than 10 TEXT columns in a single object.

So, if I want to have 10 TEXT columns, 12 STRING columns, and 12 INT columns, how much overhead is used per data type?

All this information exchange helps me understand a bit more of the problem.

With the maximum data size per saved object being 8037 bytes, how do I compute this for each database data type used? My REST request is ~210KB, so is this reported maximum of 8037 bytes an internal database index or storage-engine limit?