Can anyone help or give ideas on how to implement row-level locking (row locking) in the Backendless Database?
I looked around the forums and found that someone recommended using Backendless Counters: if the counter is 0, lock the row, and after the operation succeeds, reset the lock so other clients can access the row again.
However, there is a scenario where a client goes offline while holding the counter (never resetting it), hence locking the row forever.
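For reference, the counter-based pattern I found would look roughly like this (a minimal sketch with the Backendless JS SDK; the `lock_<objectId>` counter name is my own convention, not an official one, and exact return types should be checked against the SDK typings):

```typescript
import Backendless from 'backendless'

Backendless.initApp('YOUR_APP_ID', 'YOUR_JS_API_KEY') // placeholders

// Try to take the lock for a row: atomically flip its counter from 0 to 1.
// compareAndSet resolves to true only if the counter was still 0.
async function acquireRowLock(objectId: string): Promise<boolean> {
  return Backendless.Counters.compareAndSet(`lock_${objectId}`, 0, 1)
}

// Release the lock once the update has finished, so other clients can take it.
async function releaseRowLock(objectId: string): Promise<void> {
  await Backendless.Counters.reset(`lock_${objectId}`)
}
```

The failure mode I described is exactly the gap here: if the client dies between acquire and release, nothing ever resets the counter.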
Please suggest a workaround quickly, I need some help here, guys!
Thanks.
Usually you need to lock a particular row during update operations, when the result of one operation depends on the current row values.
For such cases we are working on a 'transaction API', which should be available within a week or two.
Or describe your case more specifically.
Phew, I need it right away, but it looks like the Backendless Transaction API is not ready yet.
I need to do some updating on a data object, which must be locked to a single client while the update happens.
For example, placing an order, where one or more clients try to simultaneously update the stock/reserve of a product.
For now, the default isolation level is Repeatable Read for any operation. So in your case, if several users want to update the same row, the requests will be queued in the order they reach the server and won't influence each other.
For other, more complex cases, wait for the 'transaction API'.
Is it related to the Unit of Work API? Or does this work with normal reads and writes?
My use case is that I need to read the current/previous value first before applying updates/changes. Previously I worked with the Unit of Work API and I don't have access to that "current value", which is disappointing.
In your case, where two separate DB operations are used, you have to use transactions to ensure atomicity.
Yes, it is the Unit-of-Work API.
Will the real-time connection help you with this?
You can inform other clients when the stock has changed.
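A minimal sketch of that idea with the JS SDK (the `Product` table and `stock` column are just example names):

```typescript
import Backendless from 'backendless'

Backendless.initApp('YOUR_APP_ID', 'YOUR_JS_API_KEY') // placeholders

// Get notified whenever a Product row changes, so the UI can refresh the stock value.
Backendless.Data.of('Product').rt().addUpdateListener((updated: any) => {
  console.log('Stock changed for', updated.objectId, '->', updated.stock)
})
```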
I like your Unit of Work API, but unfortunately it does not let me read the previous value of a specific property/attribute within a data object, which has been a huge disappointment. I used Cloud Firestore along with Backendless DB just to leverage its transaction capability (e.g. to apply a double bookkeeping principle). A Cloud Firestore transaction allows me to read the previous value/state before updating/removing data, which is great.
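For comparison, the Firestore pattern I mean is roughly this (modular web SDK; the `products` collection and `stock` field are placeholders, and the Firebase app is assumed to be initialized already):

```typescript
import { getFirestore, doc, runTransaction } from 'firebase/firestore'

const db = getFirestore()

// Reserve one unit: read the current stock inside the transaction, then write a value
// derived from it. Firestore retries the whole callback if the document changed meanwhile.
async function reserveOne(productId: string): Promise<void> {
  const productRef = doc(db, 'products', productId)
  await runTransaction(db, async (tx) => {
    const snapshot = await tx.get(productRef)
    const currentStock = (snapshot.get('stock') as number) ?? 0
    if (currentStock <= 0) {
      throw new Error('Out of stock')
    }
    tx.update(productRef, { stock: currentStock - 1 })
  })
}
```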
Sure, but the Backendless Real-Time Connection is not quite reliable and is unpredictable.
I tried logging the real-time connection, and after some period the connection suddenly dropped with a lot of timeout errors, and then after several minutes it came back up.
In an app with a lot of contention for data, this will be very bad from a user experience perspective. The client (i.e. the app/user) may wait for an unknown period just to perform changes to that data.
Row/object locking can be done with Backendless Counters, but most people do not anticipate the scenario where a client loses its internet connection while owning the lock/counter, leaving the row/object locked forever for other clients.
What I am thinking is to combine Backendless Counters, the Unit of Work API, and a Backendless Real-Time Connection Listener:
- The app/client checks whether it is connected to the Real-Time Database (attach a listener).
- If the app/client has an active connection, it proceeds to obtain a lock for the specific object/row using Backendless Counters. I limit lock acquisition to 10 attempts, in case the app/client does not immediately obtain the lock due to high contention. I was inspired by Cloud Firestore transactions, which retry at least 5 times. (A sketch of this flow follows the list.)
I also created a dummy table called RTLockedData, which contains the objectId of the locked data and who is currently locking it (ownerId).
- Read the data using the normal retrieval API. This should be consistent because the row is locked.
- Perform the changes using the UnitOfWork API (with the Repeatable Read/Serializable isolation level for stronger isolation). This places a further lock on the object/row. The app/client can now freely apply its changes to the data of interest.
- Done (release the counter and execute the transaction).
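Roughly, a sketch of steps 2-5 with the JS SDK (the `RTLockedData` table, the `lock_<objectId>` counters, and the 10-attempt loop come from my description above; the exact Unit-of-Work result shape and isolation-level argument should be double-checked against the SDK docs):

```typescript
import Backendless from 'backendless'

const MAX_ATTEMPTS = 10

// Step 2: try to obtain the lock, retrying up to 10 times under contention.
async function acquireLock(objectId: string, ownerId: string): Promise<boolean> {
  for (let attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
    // compareAndSet resolves to true only if the counter was still 0.
    const locked = await Backendless.Counters.compareAndSet(`lock_${objectId}`, 0, 1)
    if (locked) {
      // Record who holds the lock so it can be cleaned up on disconnect.
      await Backendless.Data.of('RTLockedData').save({ lockedObjectId: objectId, ownerId })
      return true
    }
    // Simple backoff before the next attempt.
    await new Promise((resolve) => setTimeout(resolve, 200 * (attempt + 1)))
  }
  return false
}

// Steps 3-5: read the current row, apply the change in a Unit of Work, then release.
async function reserveStock(objectId: string, ownerId: string, quantity: number): Promise<void> {
  if (!(await acquireLock(objectId, ownerId))) {
    throw new Error('Could not obtain the lock, please retry later')
  }
  try {
    // Step 3: normal retrieval; consistent because we hold the lock.
    const product: any = await Backendless.Data.of('Product').findById(objectId)
    if (product.stock < quantity) throw new Error('Not enough stock')

    // Step 4: apply the change through the Unit-of-Work API
    // (an isolation level such as REPEATABLE_READ can be passed per the SDK docs).
    const uow = new Backendless.UnitOfWork()
    uow.update('Product', { objectId, stock: product.stock - quantity })
    const result: any = await uow.execute()
    if (!result.success) throw new Error('UnitOfWork transaction failed')
  } finally {
    // Step 5: release the counter and remove the RTLockedData entry.
    await Backendless.Counters.reset(`lock_${objectId}`)
    const query = Backendless.DataQueryBuilder.create()
      .setWhereClause(`lockedObjectId = '${objectId}' and ownerId = '${ownerId}'`)
    const rows = await Backendless.Data.of('RTLockedData').find(query)
    for (const row of rows as any[]) {
      await Backendless.Data.of('RTLockedData').remove(row)
    }
  }
}
```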
To make this more robust, I would use the Real-Time Events provided by Backendless, which can listen for a client being disconnected or connected; I will only use the disconnected event listener. When a client is disconnected, check whether that client/user/app is holding any data in the RTLockedData table. If it is, remove the row and reset the counter of the related data to 0, so other clients can immediately compete again to obtain the data (this should happen within the attempt limit).
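The cleanup itself could be a small function like the one below (a sketch; the RTLockedData table and lock_<objectId> counters come from my proposal above). One open point: a client that has just lost its connection cannot call the server itself, so in practice this would run server-side on the disconnect event or, as shown here, on the client as soon as it reconnects:

```typescript
import Backendless from 'backendless'

// Free every lock held by the given owner: delete its RTLockedData rows
// and reset the matching counters so other clients can compete again.
async function releaseLocksOwnedBy(ownerId: string): Promise<void> {
  const query = Backendless.DataQueryBuilder.create()
    .setWhereClause(`ownerId = '${ownerId}'`)
  const lockedRows = await Backendless.Data.of('RTLockedData').find(query)
  for (const row of lockedRows as any[]) {
    await Backendless.Counters.reset(`lock_${row.lockedObjectId}`)
    await Backendless.Data.of('RTLockedData').remove(row)
  }
}

// One possible wiring: when this client re-establishes its real-time connection,
// clear any lock it may have left behind while it was offline.
Backendless.RT.addConnectEventListener(async () => {
  const user = await Backendless.UserService.getCurrentUser()
  if (user) {
    await releaseLocksOwnedBy(user.objectId as string)
  }
})
```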
What do you all think? @oleg-vyalyh @mohammad_altoiher @mark-piller
You could simplify it to start with and add more operations if needed.
For me, I would rely on the real-time connection. When the client is disconnected, I would first inform it that it is no longer connected and will lose the lock after a set amount of time unless it reconnects. On the server side, I would start a timer when the client disconnects; when enough time has passed and the client has not rejoined, I would unlock the object and inform all connected users that the item is free again.
All using the real-time connections alone.
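If one goes that route, the server-side piece could be as small as a scheduled job that frees leases whose deadline has passed. A sketch, reusing the RTLockedData table from above with an assumed expiresAt timestamp column; the scheduling itself (e.g. a Cloud Code timer) is not shown:

```typescript
import Backendless from 'backendless'

// Free every lease whose deadline has passed, so a disconnected client
// cannot hold an object forever.
async function expireStaleLeases(): Promise<void> {
  const query = Backendless.DataQueryBuilder.create()
    .setWhereClause(`expiresAt < ${Date.now()}`)
  const expired = await Backendless.Data.of('RTLockedData').find(query)
  for (const row of expired as any[]) {
    await Backendless.Counters.reset(`lock_${row.lockedObjectId}`)
    await Backendless.Data.of('RTLockedData').remove(row)
  }
}
```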
Thanks for the advice, but I don't think it fully covers my use case.
What you are suggesting is leasing/leases, not locking/locks.
I don’t want to lock my row for a specific period or range of time, just when the transaction/changes are being performed.
My proposed solution above works well for implementing row-level/object/record locking in Backendless. It also handles connection drops/losses, and it structures how to retry the operation.
Thanks all.