Hello Backendless Support Team,
We are experiencing consistent failures when exporting large tables via the Backendless Console export API (backendless-console-sdk for Node.js, v2.49.0), and we would appreciate your assistance.
Summary
Our automated backup process exports Backendless data every hour and uploads the resulting ZIP file to external storage. The process works correctly for most tables, but two large tables (~2 million records each) consistently fail during export with a MySQL query-timeout error.
Error Message
EXPORT Exporting table TABLE_NAME failed Exception:com.mysql.cj.jdbc.exceptions.MySQLQueryInterruptedException: Query execution was interrupted, query_timeout exceeded
com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:118)
com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:114)
com.mysql.cj.jdbc.result.ResultSetImpl.next(ResultSetImpl.java:1833)
jdk.internal.reflect.GeneratedMethodAccessor73.invoke(Unknown Source)
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.base/java.lang.reflect.Method.invoke(Method.java:568)
com.mysql.cj.jdbc.ha.MultiHostConnectionProxy$JdbcInterfaceProxy.invoke(MultiHostConnectionProxy.java:111)
com.mysql.cj.jdbc.ha.FailoverConnectionProxy$FailoverJdbcInterfaceProxy.invoke(FailoverConnectionProxy.java:91)
jdk.proxy2/jdk.proxy2.$Proxy43.next(Unknown Source)
org.apache.commons.dbcp2.DelegatingResultSet.next(DelegatingResultSet.java:1159)
com.backendless.tasks.impex.DataTableExporter.lambda$execute$0(DataTableExporter.java:105)
org.hibernate.jdbc.WorkExecutor.executeReturningWork(WorkExecutor.java:55)
org.hibernate.internal.AbstractSharedSessionContract.lambda$doReturningWork$2(AbstractSharedSessionContract.java:1117)
org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.coordinateWork(JdbcCoordinatorImpl.java:308)
org.hibernate.internal.AbstractSharedSessionContract.doWork(AbstractSharedSessionContract.java:1125)
org.hibernate.internal.AbstractSharedSessionContract.doReturningWork(AbstractSharedSessionContract.java:1121)
com.backendless.datamodel.application.dao.AppJpaTransaction.lambda$getExecuteNativePayload$3(AppJpaTransaction.java:117)
com.backendless.datamodel.application.dao.AppJpaTransaction.executeSync(AppJpaTransaction.java:135)
com.backendless.datamodel.application.dao.AppJpaTransaction.executeSync(AppJpaTransaction.java:217)
com.backendless.datamodel.application.dao.AppJpaTransaction.lambda$execute$0(AppJpaTransaction.java:49)
java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1768)
com.backendless.async.BackendlessExecutorService.lambda$execute$1(BackendlessExecutorService.java:153)
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
java.base/java.lang.Thread.run(Thread.java:840)
Environment & Setup
- Export triggered via API: transfer.startExport
- Export configuration: schemaOnly: false, addDataTypes: true, appsettings: true; all tables exported by ID
- Export location: /exportdirectory
- Tables affected: 2 tables, each ~2 million records
- Other tables: export successfully
- Frequency: hourly (AWS Lambda)
- SDK: Backendless Console SDK, used from Node.js
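For reference, here is a simplified sketch of how our Lambda triggers the export. This is illustrative, not our exact code: "consoleClient" is a placeholder for the backendless-console-sdk client handle (construction omitted), while transfer.startExport and the config keys mirror the settings listed above.

```javascript
// Simplified sketch of our hourly backup trigger. "consoleClient" is a
// placeholder for the backendless-console-sdk client handle (construction
// omitted); the config keys mirror the export settings listed above.
const exportConfig = {
  schemaOnly: false,   // export the data, not only the schema
  addDataTypes: true,  // include column data types
  appsettings: true,   // include application settings
  // ...plus the IDs of all tables to export
};

async function runHourlyBackup(consoleClient, appId) {
  // Starts the export job; we later download the resulting ZIP and
  // upload it to external storage.
  return consoleClient.transfer.startExport(appId, exportConfig);
}
```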
Observations
- The failure occurs consistently on the same large tables.
- The error indicates the query is being interrupted due to a timeout on the Backendless/MySQL side.
- Retrying does not resolve the issue.
- Smaller tables export without any problems.
Questions
- Is there a row limit or execution timeout for table exports?
- Are there recommended best practices for exporting very large tables (e.g., chunking, pagination, or table-specific exports)?
- Can the query timeout be increased for exports, or can this be configured on your side?
- Is there an alternative or recommended backup strategy for large datasets in Backendless?
If helpful, we can:
- Export tables individually
- Provide application ID and table names privately
- Share logs or timestamps of failed exports
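As an example of the first option, the per-table fallback we have in mind would look roughly like this. This is a hypothetical sketch only: "consoleClient" is a placeholder client handle and the "tables" field is an assumed config shape for restricting an export to one table, not a confirmed SDK parameter.

```javascript
// Hypothetical fallback: export each table in its own job instead of one
// export covering all tables, so the two large tables can be isolated and
// retried on their own. "consoleClient" is a placeholder client handle and
// the "tables" field is an assumed config shape, not a confirmed SDK option.
async function exportTablesIndividually(consoleClient, appId, tableIds) {
  const jobs = [];
  for (const tableId of tableIds) {
    // One startExport call per table; exporting app settings once is
    // enough, so they are skipped in the per-table jobs.
    jobs.push(await consoleClient.transfer.startExport(appId, {
      schemaOnly: false,
      addDataTypes: true,
      appsettings: false,
      tables: [tableId], // assumed way to restrict the export to one table
    }));
  }
  return jobs;
}
```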
Thank you for your help. We rely on this export process for production backups and would appreciate guidance on making it reliable for large datasets.