Comparing two large objects or lists and performing CRUD operations

Okay, I have now made it to the point where I can make an API call to Salesforce, get back the list of all the field names, and load them into a data table in Backendless. My goal is for Backendless to have the same data structure as my Salesforce instance for certain objects. If something changes in Salesforce, the change flows over automatically so my table schemas are always in sync.

Now what I need to do is make sure my data table stays synced. So when I bring the data structure back from Salesforce via the API, I need to compare it with what is currently in Backendless and create, update, and delete records in Backendless wherever we find differences.
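Roughly, the comparison I have in mind looks like the sketch below, in plain Python; the column names ObjectName, FieldName, and FieldType are just placeholders for whatever my real schema ends up being.

# Sketch: compute the create/update/delete sets between the field list
# pulled from Salesforce and the rows already stored in Backendless.
# Both sides are keyed by the composite "ObjectName.FieldName" value.

def diff_schemas(sfdc_fields, bl_rows):
    """sfdc_fields / bl_rows: lists of dicts with ObjectName, FieldName, FieldType."""
    key = lambda r: f"{r['ObjectName']}.{r['FieldName']}"
    sfdc = {key(r): r for r in sfdc_fields}
    bl = {key(r): r for r in bl_rows}

    to_create = [sfdc[k] for k in sfdc.keys() - bl.keys()]
    to_delete = [bl[k] for k in bl.keys() - sfdc.keys()]
    to_update = [sfdc[k] for k in sfdc.keys() & bl.keys()
                 if sfdc[k]["FieldType"] != bl[k]["FieldType"]]
    return to_create, to_update, to_delete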

Obviously, I want to do this as efficiently as possible, so what is the best way to handle large amounts of data and compare two data sources in Backendless?

In my Backendless table I tried to create a calculated field that acts as a unique key for the table by combining the object name plus the field name. I tried an upsert, but it is not working as expected, and I suspect that is because I cannot specify my combined field as the primary key for the data table.

I see we have bulk operations available, and there is also a map function, but I am not sure whether those are the right tools for this task. I am also sure the two systems will get out of sync at some point or another, and I will need the tools to clean things up.

I am assuming it is good practice not to hit the database on every pass through a loop or something like that. My understanding is that it is better to collect things into a list and send it all through at once.
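For example, instead of saving each row inside the loop, I would collect the rows and send them in one bulk request, something like the sketch below. I am assuming the standard bulk-create route POST /data/bulk/<table-name> here, and SalesforceField is just a made-up table name.

import requests

BASE = "https://api.backendless.com/<app-id>/<rest-api-key>"

def bulk_create(table, records):
    # One API operation for the whole list instead of one insert per record.
    resp = requests.post(f"{BASE}/data/bulk/{table}", json=records)
    resp.raise_for_status()
    return resp.json()  # the objectIds of the created records

# usage: bulk_create("SalesforceField", to_create)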

I am just looking for the best practices in this Codeless environment without needlessly burning through a million API operations.

hello @Ryan_Belisle

I tried an upsert, but it is not working as expected, and I suspect that is because I cannot specify my combined field as the primary key for the data table

The primary key is always objectId, so you should not create your own unique field; instead, put the value you need into objectId. Check the following example:

ksv510@Sergeys-MacBook-Pro ~ % curl -X POST -H 'Content-Type:application/json' 'https://api.backendless.com/23298D6F-EA2F-7BE6-FFBF-7507875E1E00/<your-api-key>/data/Customer' -d '{"objectId": "my-object-id-with-custom-value"}' -i
HTTP/1.1 200 OK
server: nginx
date: Mon, 28 Nov 2022 09:39:59 GMT
content-type: application/json
content-length: 194
access-control-allow-origin: *
access-control-allow-methods: POST, GET, OPTIONS, PUT, DELETE, PATCH
strict-transport-security: max-age=86400

{"file":null,"test":null,"created":1669628399565,"___class":"Customer","ownerId":"00D886EA-E993-43CD-AEF6-104B005929B0","NextInt":null,"updated":null,"objectId":"my-object-id-with-custom-value"}%

So now you will be able to use upsert.
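As a sketch, here is the same idea from a script rather than curl. I am assuming the single-object upsert route PUT /data/<table-name>/upsert and placeholder column names, so adjust it to your actual table.

import requests

BASE = "https://api.backendless.com/<app-id>/<rest-api-key>"

def upsert_field(table, object_name, field_name, field_type):
    # The composite key goes straight into objectId, so the same call
    # creates the row the first time and updates it on every later sync.
    record = {
        "objectId": f"{object_name}.{field_name}",
        "ObjectName": object_name,
        "FieldName": field_name,
        "FieldType": field_type,
    }
    resp = requests.put(f"{BASE}/data/{table}/upsert", json=record)
    resp.raise_for_status()
    return resp.json()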

Wow, interesting, thanks! I thought objectId was a value that is automatically assigned by Backendless. In the UI I am not able to go into that field and change it. Are you saying I can change it if I set it via the API and do NOT go through the UI?

P.S. I also saw an article that explains how I can retrieve more than 100 records at a time for updating, which is helpful:

https://support.backendless.com/docs?topic=13889
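In case it helps anyone else, my rough understanding of the approach is paging with pageSize and offset, since a single request returns at most 100 rows; something like the sketch below.

import requests

BASE = "https://api.backendless.com/<app-id>/<rest-api-key>"

def fetch_all(table, page_size=100):
    # pageSize is capped at 100, so keep increasing the offset until a
    # page comes back smaller than page_size.
    rows, offset = [], 0
    while True:
        resp = requests.get(f"{BASE}/data/{table}",
                            params={"pageSize": page_size, "offset": offset})
        resp.raise_for_status()
        page = resp.json()
        rows.extend(page)
        if len(page) < page_size:
            return rows
        offset += page_size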

The other thing I found when trying to import large Salesforce objects into Backendless was that there seemed to be a limit on the number of columns/data size.

I am thinking that instead of trying to map out and extract everything from Salesforce into individual Backendless columns, I can just store the entire JSON object in a single column. Then I can override the objectId you generate with the Salesforce record Id via the method you described above.
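As a sketch of what I mean, assuming a JSON column I would name Payload and the same upsert route as above:

import requests

BASE = "https://api.backendless.com/<app-id>/<rest-api-key>"

def upsert_salesforce_record(table, sf_record):
    # sf_record is the raw dict returned by the Salesforce API. Its Id
    # becomes the Backendless objectId, and the whole record lands in a
    # single JSON column instead of one Backendless column per field.
    body = {
        "objectId": sf_record["Id"],
        "Payload": sf_record,  # JSON column (assumed name)
    }
    resp = requests.put(f"{BASE}/data/{table}/upsert", json=body)
    resp.raise_for_status()
    return resp.json()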

What do you think about that? I just saw the video you made on the JSON data type, and that could be a possibility for dealing with larger data sets, no?

You can't change objectId via the Backendless Console, but you can do it using the API.

a limit on the number of columns/data size

What exact error did you get?

I can just store the entire JSON object in a single column

It has pros and cons. The main con is that JSON has no index. Some details can be found here: https://www.slideshare.net/billkarwin/how-to-use-json-in-mysql-wrong?from_action=save In short, JSON is a good fit if you store a small amount of data in each record and you do not search across a lot of records.