Okay, I have now made it to the point where I can make an API call out to Salesforce, return the list of all the field names, and put those into a data table in Backendless. My goal is to make Backendless have the same data structure as my Salesforce instance for certain objects. If Salesforce has a change, it will flow over automatically so my table schemas are always in sync.
Now, what I need to do is make sure my data table stays synced. So, when I bring back the data structure via API from Salesforce, I need to be able to compare it against what is currently in Backendless and create, update, or delete records in Backendless wherever we find differences.
Obviously, I want to do this as efficiently as possible, so what is the best way to handle large amounts of data and compare two data sources in Backendless?
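To make it concrete, this is the diff I picture running in memory before touching the database (a plain TypeScript sketch; the field shape and names here are my own assumptions, not anything Backendless-specific):

```typescript
// Key both sides by objectName + fieldName, then bucket the differences
// into create / update / delete sets.
interface FieldRow {
  objectName: string; // e.g. "Account"
  fieldName: string;  // e.g. "Industry"
  fieldType: string;  // e.g. "picklist"
}

const keyOf = (f: FieldRow) => `${f.objectName}.${f.fieldName}`;

function diffSchemas(salesforce: FieldRow[], backendless: FieldRow[]) {
  const sfByKey = new Map(salesforce.map(f => [keyOf(f), f]));
  const blByKey = new Map(backendless.map(f => [keyOf(f), f]));

  // In Salesforce but not Backendless -> create
  const toCreate = salesforce.filter(f => !blByKey.has(keyOf(f)));
  // In Backendless but not Salesforce -> delete
  const toDelete = backendless.filter(f => !sfByKey.has(keyOf(f)));
  // In both, but changed -> update
  const toUpdate = salesforce.filter(f => {
    const existing = blByKey.get(keyOf(f));
    return existing !== undefined && existing.fieldType !== f.fieldType;
  });

  return { toCreate, toUpdate, toDelete };
}
```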
In my Backendless table I tried to create a calculated field that acts as a unique key for the table by combining the object name PLUS the field name. I tried an upsert, but it is not working as expected, and I suspect it is because I cannot designate my combined field as the primary key for the data table.
I see we have bulk operations available, and there is also a map function, but I am not sure whether those are the right tools for this task. I am also sure the two systems will get out of sync at some point or another, and I will need tools to clean things up.
I am assuming it is good practice not to hit the database on every pass through a loop. My understanding is that it is better to collect everything into a list and send it all through at once.
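Something like this, in other words (the bulkCreate call is from the JS SDK as I understand it; the table name and the 100-row chunk size are my own guesses):

```typescript
import Backendless from 'backendless';

Backendless.initApp('YOUR-APP-ID', 'YOUR-JS-API-KEY'); // placeholders

// Instead of one save per iteration, accumulate rows and send bulk calls:
// one API operation per chunk rather than one per row.
async function saveFieldRows(rows: object[]) {
  const CHUNK = 100; // assumed safe batch size
  for (let i = 0; i < rows.length; i += CHUNK) {
    const chunk = rows.slice(i, i + CHUNK);
    await Backendless.Data.of('SalesforceFields').bulkCreate(chunk);
  }
}
```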
I am just looking for the best practices in this Codeless environment so I don't burn through a million API operations needlessly.
I tried an upsert, but it is not working as expected, and I suspect it is because I cannot designate my combined field as the primary key for the data table
The primary key is always objectId, so you should not create your own unique field. Instead, put the value you need into objectId. Check the following example:
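For instance, in REST terms (a rough sketch: the app id, REST key, and table name are placeholders, and you should verify the upsert endpoint against the current REST docs):

```typescript
const APP_ID = 'YOUR-APP-ID';
const REST_KEY = 'YOUR-REST-API-KEY';

// Upsert with a caller-supplied objectId: if a record with this objectId
// exists it is updated, otherwise it is created. The composite key the
// user wanted simply becomes the objectId value.
async function upsertField(objectName: string, fieldName: string, fieldType: string) {
  const record = {
    objectId: `${objectName}.${fieldName}`, // e.g. "Account.Industry"
    objectName,
    fieldName,
    fieldType,
  };
  const res = await fetch(
    `https://api.backendless.com/${APP_ID}/${REST_KEY}/data/SalesforceFields/upsert`,
    {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(record),
    }
  );
  return res.json();
}
```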
Wow, interesting, thanks! I thought objectId was a value automatically assigned by Backendless; in the UI I am not able to go into that field to change it. Are you saying that I can set it if I go through the API and NOT through the UI?
PS…I also saw an article that explains how I can retrieve more than 100 records at a time for updating, which is helpful.
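For my own notes, this is the paging pattern I took away from it (pageSize caps at 100 per request as I understand it, so you walk the offset; the table name is a placeholder):

```typescript
// Walk the table in pages of 100 and collect every row before running the diff.
async function fetchAllRows(appId: string, restKey: string, table: string) {
  const rows: object[] = [];
  const PAGE = 100; // max pageSize per request
  for (let offset = 0; ; offset += PAGE) {
    const res = await fetch(
      `https://api.backendless.com/${appId}/${restKey}/data/${table}?pageSize=${PAGE}&offset=${offset}`
    );
    const page: object[] = await res.json();
    rows.push(...page);
    if (page.length < PAGE) break; // last page reached
  }
  return rows;
}
```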
The other thing I found when trying to import large Salesforce objects into Backendless was that there seemed to be a limit on the number of columns/data size.
I am thinking that instead of trying to map out and extract everything from Salesforce into individual Backendless columns, I can just store the entire JSON object in a single column. Then I can override the objectId you provide with the Salesforce record Id via the method you described above.
What do you think about that? I just saw a video on the JSON object type you guys made, and that could be a possibility for dealing with larger data sets, no?
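Concretely, I am picturing something like this (the SalesforceRecords table and rawJson column are names I made up, and the same upsert-endpoint caveat from above applies):

```typescript
// One row per Salesforce record: the Salesforce Id becomes the Backendless
// objectId, and the whole payload is stashed in a single JSON-typed column
// instead of being spread across per-field columns.
async function upsertSalesforceRecord(
  appId: string,
  restKey: string,
  sfRecord: { Id: string }
) {
  const row = {
    objectId: sfRecord.Id, // Salesforce record Id as the primary key
    rawJson: sfRecord,     // entire record in one JSON column
  };
  const res = await fetch(
    `https://api.backendless.com/${appId}/${restKey}/data/SalesforceRecords/upsert`,
    {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(row),
    }
  );
  return res.json();
}
```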