Hi guys, I’m having a timeout problem with some business logic. It worked fine before and I didn’t change anything. Here is a screenshot of the blocks. It’s used to update the ranking of records based on certain columns.
Hello.
I would advise you to add timestamp logic (writing records to a file, or logging) inside your code in order to determine exactly which part of your code causes the problem.
I’ve tested all the parts by taking out some blocks and adding them back in. It looks like the problem is at the end, when it saves all the objects back to the database.
We don’t have any problems with the DB or any other subsystems.
Could you (just as a test) put that (presumably problematic) piece of code into a separate handler (e.g. beforeSave) and check whether it works?
I would also recommend logging those objects and their count, just to understand what you are trying to save.
I recommend adding multiple log statements so you can see the timing between the API calls in your logic. If the timeout happens before the last block, it only means that all the allotted time had been spent on other parts of the logic.
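The logging advice above can be sketched as a small stopwatch helper. This is a minimal illustration in plain JavaScript, not Backendless-specific code; the function and label names are made up for the example, and the clock is injectable only so the sketch is easy to test:

```javascript
// Minimal stopwatch sketch for timing the gaps between logic steps.
// `now` defaults to Date.now; it is a parameter only so tests can inject a fake clock.
function createStopwatch(now = Date.now) {
  const marks = [];
  return {
    // Record a labeled timestamp at each step of the logic.
    mark(label) {
      marks.push({ label, t: now() });
    },
    // Return the elapsed milliseconds between each consecutive pair of marks.
    report() {
      return marks.slice(1).map((m, i) => ({
        from: marks[i].label,
        to: m.label,
        ms: m.t - marks[i].t,
      }));
    },
  };
}
```

Calling `mark('afterFind')`, `mark('afterSave')`, etc. between blocks and logging `report()` shows exactly which step eats the 5-second budget.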
I’ve added logs at the beginning and in the middle. Retrieving all the records (only 131 rows) took a lot of time, but those are standard steps, so I’m not sure why it was so slow. Also, does it mean that when a script runs for more than 5s it just stops because of plan limits?
The majority of time is spent between your debug points 2 and 3. In there you have multiple calls going on, such as retrieving data from the database in a loop and then incrementing a counter for each object. The counter is also an API call. You could add more debug points to understand how much time each step takes.
As for the timeout, yes, the script stops when it takes more than 5 seconds and that’s because of the plan limit.
The general idea of this timer is to update the ranking of all the records in the League table. Is there a better way to do this? These blocks just retrieve all the records, order them, and then use a counter to give each one a ranking. But obviously it’s taking a lot of time, even though I only have 131 records now, and it would take much longer in the future once I have more records. Can you think of a more efficient way to update the ranking? Thanks!!
I have a division column and a score column. First order the records by division ascending, then by score descending. Then a counter assigns a rank starting from 1: the first record gets 1, the second gets 2, and so on.
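The rule described above can be sketched in a few lines of plain JavaScript. This is just an in-memory illustration of the sort-and-count logic, with illustrative field names (`division`, `score`, `ranking`), not the Codeless blocks themselves:

```javascript
// Sketch of the ranking rule: sort by division ascending, then score
// descending, and assign ranks 1..N in that order.
function assignRankings(records) {
  const sorted = [...records].sort(
    (a, b) => a.division - b.division || b.score - a.score
  );
  sorted.forEach((rec, i) => {
    rec.ranking = i + 1; // first record gets 1, second gets 2, ...
  });
  return sorted;
}
```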
How frequently are the objects added to the League table?
Also, suppose we figure out a way to assign the ranking and the data is saved in the database, what do you do with that value then? Determine top 10, top 100?
The ranking should update itself each time there is a new match, so it might update several times a day. And it needs to do so right after the match result is submitted (to another table).
In the League table each record is basically a user profile, so the ranking needs to be there for all of them, because each logged-in user will see his own ranking.
Notice that up to logger 3 it’s still within one second; then, once it enters the save-objects part, it gets very slow even though there are just 131 records. Is it normal for writing all the records back to the database to be this slow? I didn’t encounter this error until today; this timer worked fine before with the same number of records.
Saving an object takes about 400 milliseconds. Doing that 131 times would add up to roughly 52 seconds of API calls, so it will clearly cause a timeout. You need to rethink your approach to rankings. The current approach is not optimized and will not scale.
I see. In some earlier posts I asked for ideas to make the ranking system work, and the answer I got from Backendless staff was to retrieve all the objects and save them back once they’re correctly ranked. Now that that turns out not to be scalable, is there a better solution? A ranking system should be fairly standard for sports apps, I suppose.
Same issue here, doing something similar… was there any update?
I’d hate to think that even on paid plans a mass update of a hundred records cannot be done. At 400 ms per save and a 5-second timeout, you can only update a maximum of 12 records in a single call?
What is the alternative, a hundred API calls to update one record at a time?
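One alternative worth considering is to not store the ranking at all, and instead compute a single user's rank on demand: the rank is simply 1 plus the number of records that sort ahead of that user. On the server that could be a single object-count query with a where clause; the sketch below shows the idea in plain JavaScript with an in-memory stand-in for the count. Field names (`division`, `score`) and the helper names are illustrative assumptions, not Backendless API:

```javascript
// Sketch: a where clause selecting every record that sorts ahead of `user`
// under "division ascending, then score descending" ordering.
// (Assumption: numeric `division` and `score` columns.)
function rankWhereClause(user) {
  return `division < ${user.division} OR (division = ${user.division} AND score > ${user.score})`;
}

// In-memory stand-in for the server-side count query: rank = records ahead + 1.
function rankOf(user, allRecords) {
  const ahead = allRecords.filter(
    (r) =>
      r.division < user.division ||
      (r.division === user.division && r.score > user.score)
  ).length;
  return ahead + 1;
}
```

This replaces N saves per match with one count query per rank lookup, and it scales with the number of lookups rather than the table size. Note that ties on (division, score) share the same rank under this rule; whether that is acceptable depends on the app.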