Loading more than 10 entries

Hello
I have a table A with around 2000 records, each of which has a 1:1 relationship with a record in another table B (think of it like customers and addresses tables).
If I use single-step retrieval I am limited to 10 entries and have to go through paging (more complex, and it takes a long time to load everything).

Is there any way I can load the whole customers table along with their addresses in one shot? I am interested in all customer names and their street names (so a specific column from the related table). I remember I used to be able to do this but I can't exactly recall how.

Thank you


Hello,

Are you asking about the primary table records or the related records?

No, only with paging.

Regards,
Mark

I am asking about the primary table (in this case table A, i.e. customers).

Oooh, but that means I have to page 200 times?

Suggestion: if I say "get me all customers of that table" and I add the following property to retrieve, [customers.Address.Street number], will it still abide by the 10-entries rule?

Not if you read the documentation :wink: The link is below:

https://backendless.com/docs/rest/data_data_paging.html

The number of records (i.e. pageSize) and what properties are returned are completely separated and do not overlap. You can set the page size to one value and request properties X, Y and Z or set the page size to another value and request a completely different set of properties. The backend will just follow your query instructions.
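To illustrate the point that page size and requested properties are independent, here is a minimal sketch that just builds the request URL. It assumes the `pageSize`, `offset`, `props`, and `loadRelations` query parameters described in the paging documentation linked above; the app ID, API key, and property names are placeholders, not values from this thread.

```python
from urllib.parse import urlencode

# Base REST endpoint for a data table (APP-ID and REST-API-KEY are placeholders).
BASE_URL = "https://api.backendless.com/APP-ID/REST-API-KEY/data/customers"

def build_query_url(page_size, offset, props=None, load_relations=None):
    """Build a Backendless-style data retrieval URL.

    pageSize controls how many records come back; props controls which
    columns come back. The two do not affect each other.
    """
    params = {"pageSize": page_size, "offset": offset}
    if props:
        params["props"] = ",".join(props)          # e.g. name, Address.Street
    if load_relations:
        params["loadRelations"] = ",".join(load_relations)
    return f"{BASE_URL}?{urlencode(params)}"

url = build_query_url(100, 0,
                      props=["name", "Address.Street"],
                      load_relations=["Address"])
print(url)
```

Changing `page_size` here would not change the `props` part of the URL, and vice versa, which is exactly the separation described above.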


Hi Mark
I am very versatile with Backendless :slight_smile: (or at least I see myself as such after using it for multiple clients over the years, the majority of them paying). I was always able to design the backend DB to get around the size limitation, but it has been a while, and coming back to my gigs now, I know a lot has changed in the last 2 years or so. So please be patient with me.

I know we can set the page size to 100, but I was going with the default size for ease of discussion, as scalability will remain an issue.
Even with a size of 100 we are talking about 20 calls, which is easily more than 5 seconds of processing, and that multiplies heavily as the number of records reaches 10k, which is what I expect.

When you do a where-clause search, you get all the matching records of the table in one shot (unless that was changed). From my experience, if I say "get me all customers that live in Alberta", then since this is one table call, I will get all the customers in that table that match the query (in this example the 2k customers, with no paging), correct?

The issue is that I need all of these customers along with one attribute from another table.


Backendless never returned all records in a single request. If you have 2000 records, yes, it will take 20 requests to get them all.

However, my question is why would you do that? Not a single person can grasp that much data at a glance and not a single UI can display that much data without scrolling.

Hmm okay then I must have assumed this all along or never had a case of more than 200 before.

The case I have is in-memory processing. I get a text from an internal source at a rate of one text every 100 ms, and I need to manipulate each text around 6 times; for every manipulation I need to go through all the customer entries to check for a match. I can't do each one of these checks over a server call. This is why I wanted to load them all up, so I can do all the checks in memory.
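The in-memory check described above can be sketched as follows. This is only an illustration: the field names and sample records are hypothetical, and in practice the list would be filled by the paged download rather than hard-coded.

```python
# Hypothetical customer records; in reality these would come from the
# paged Backendless download discussed in this thread.
customers = [
    {"name": "Acme Ltd", "street": "1st Ave"},
    {"name": "Globex", "street": "Main St"},
]

# Build a lookup set once, so each of the ~6 checks per incoming text
# is an O(1) set membership test instead of a server round trip.
names = {c["name"].lower() for c in customers}

def matches_customer(text):
    """Return True if the (manipulated) text equals a known customer name."""
    return text.strip().lower() in names

print(matches_customer("ACME LTD"))  # True
print(matches_customer("Unknown"))   # False
```

With one text every 100 ms and 6 manipulations per text, the set lookup keeps the per-text cost negligible, whereas 6 server calls per text would not keep up.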

Hello @snakeeyes

You can write a function that gets the record count and, based on that, performs the required number of data-loading cycles.
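The count-then-page approach can be sketched in Python like this. The server calls are faked with an in-memory table so the loop logic is the focus; in a real client, `get_record_count` and `fetch_page` would issue the count and paged-retrieval requests described in the REST docs linked earlier.

```python
import math

# Fake data standing in for the server-side table (2000 records).
_TABLE = [{"objectId": str(i), "name": f"customer {i}"} for i in range(2000)]

def get_record_count():
    """Stand-in for the table's record-count request."""
    return len(_TABLE)

def fetch_page(page_size, offset):
    """Stand-in for one paged retrieval (pageSize/offset parameters)."""
    return _TABLE[offset:offset + page_size]

def load_all(page_size=100):
    """Get the count first, then loop over the required number of pages."""
    total = get_record_count()
    pages = math.ceil(total / page_size)
    records = []
    for i in range(pages):
        records.extend(fetch_page(page_size, i * page_size))
    return records

all_records = load_all()
print(len(all_records))  # 2000
```

For 2000 records and a page size of 100 this makes 20 fetches, matching the call count discussed earlier in the thread.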

Codeless example:

Regards