We are testing out Backendless and have uploaded a large dataset of 25,601 rows and 33 columns. All 33 columns are strings, and the column count needs to stay at 33 because the data within each row is all related.
Our macOS app has one viewController, a button, and a tableView. When the button is clicked, the code asynchronously queries Backendless for 1182 records, using nextPageAsync to populate an array of objects. When that's complete, the tableView is populated from the array.
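Roughly, the loading code follows the pattern below (a simplified sketch in current Swift rather than our Swift 2 source; fetchPage is a placeholder standing in for the SDK's paged find / nextPageAsync call, not the actual Backendless API):

```swift
import Foundation

typealias Row = [String: String]                 // one record: 33 string columns keyed by name
typealias PageHandler = ([Row]) -> Void
typealias PageFetcher = (Int, Int, PageHandler) -> Void   // (offset, pageSize, completion)

/// Pull `total` rows one page at a time, appending each page to an array,
/// then hand the complete array back. `fetchPage` is a placeholder for the
/// SDK's paged query call.
func loadAll(total: Int,
             pageSize: Int = 100,
             fetchPage: @escaping PageFetcher,
             completion: @escaping PageHandler) {
    var rows: [Row] = []
    func next(_ offset: Int) {
        guard offset < total else { completion(rows); return }
        fetchPage(offset, pageSize) { page in
            rows.append(contentsOf: page)
            if page.isEmpty {
                completion(rows)              // server ran out of data early
            } else {
                next(offset + page.count)     // request the next page
            }
        }
    }
    next(0)
}
```

The button action calls this once, and the completion handler reloads the tableView from the finished array.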
Our setup:
Ping to Backendless: icmp_seq=0 ttl=43 time=49.308 ms
We have a 200 Mbps download connection.
macOS 10.11, Swift 2, using async calls.
Firebase does the same query with the same dataset in 0.8 seconds.
MongoDB and Couchbase come in at 1 second.
App ID: 7BAC40E6-4F32-0F1D-FF13-87D6D051EB00
Backendless takes approximately 11 seconds to query and return 1000 records.
We tested in the REST console on the site, and while it's 'snappier', it only returns 100 records at a time, so there's no direct way to compare.
Is that a normal expected time?
We fully understand the need to provide a limited dataset to the user so as not to overwhelm them. However, in this use case the data needs to be live-filtered in code so the user can examine the filtered data, and performing dozens of queries as the user types is not an option, so 1182 records is appropriate.
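For context, the in-memory filtering we mean is along these lines (a simplified sketch; the function and its names are illustrative, not our actual code):

```swift
import Foundation

// Filter the already-loaded rows as the user types; no network round trip.
// `allRows` is the array of 1182 records loaded up front.
func filteredRows(_ allRows: [[String: String]], matching searchText: String) -> [[String: String]] {
    guard !searchText.isEmpty else { return allRows }
    let needle = searchText.lowercased()
    // Keep a row if any of its 33 string columns contains the typed text.
    return allRows.filter { row in
        row.values.contains { $0.lowercased().contains(needle) }
    }
}
```

This runs on every keystroke against the already-loaded array and simply drives a tableView reload, which is why all 1182 objects need to be in memory up front.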
Backendless Cloud is limited to 100 objects per response. This can be changed only in Managed Backendless, which is a dedicated installation. The comparison you’re describing does not sound like apples to apples, since you’re making 11 queries in Backendless.
Regards,
Mark
Thank you, Mark.
As mentioned, in our use case we need 1182 objects loaded so we can filter those objects in code for the user.
Suppose it's a list of items in a tableView that gets filtered down to fewer objects as the user types.
Regardless of comparisons, is 11 seconds to retrieve 1000 objects of the size outlined in the original question about ‘normal’? If it should be considerably faster, we need to inspect our code to see what we did wrong. If that’s about ‘right’ we can move on.
I cannot say whether it is "normal". There are many variables, such as whether you have established indices and whether you have any autoload relations (a bad idea).
If you load all the objects sequentially, then with the connection setup/teardown time for each request, you end up with about a second (perhaps less) per request, which is reasonable. With asynchronous retrieval it can be sped up significantly.
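Sketched very roughly (placeholder page fetcher, not actual Backendless SDK calls), issuing the page requests concurrently and merging the results would look something like this:

```swift
import Foundation

typealias Row = [String: String]                 // same shape as in the earlier sketch
typealias PageHandler = ([Row]) -> Void
typealias PageFetcher = (Int, Int, PageHandler) -> Void   // (offset, pageSize, completion)

/// Fire all page requests at once and stitch the results back together in
/// offset order once every page has returned; wall-clock time is then roughly
/// the slowest single request rather than the sum of all of them.
func loadAllConcurrently(total: Int,
                         pageSize: Int = 100,
                         fetchPage: @escaping PageFetcher,
                         completion: @escaping PageHandler) {
    var pages = [Int: [Row]]()
    let group = DispatchGroup()
    let lock = NSLock()

    for offset in stride(from: 0, to: total, by: pageSize) {
        group.enter()
        fetchPage(offset, pageSize) { page in
            lock.lock(); pages[offset] = page; lock.unlock()
            group.leave()
        }
    }
    group.notify(queue: .main) {
        // Merge pages in offset order so the result matches a sequential load.
        completion(pages.keys.sorted().flatMap { pages[$0] ?? [] })
    }
}
```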