Export fails for large tables (~2M rows) with MySQLQueryInterruptedException

Hello Backendless Support Team,

We are experiencing consistent failures when exporting large tables using the Backendless Console export API (backendless-console-sdk, JS SDK v2.49.0), and we would appreciate your assistance.

Summary

Our automated backup process exports Backendless data every hour and uploads the resulting ZIP file to separate storage. The process works correctly for most tables, but two large tables (~2 million records each) consistently fail during export with a MySQL query timeout error.

Error Message

EXPORT  Exporting table TABLE_NAME failed Exception:com.mysql.cj.jdbc.exceptions.MySQLQueryInterruptedException: Query execution was interrupted, query_timeout exceeded

		com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:118)
		com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:114)
		com.mysql.cj.jdbc.result.ResultSetImpl.next(ResultSetImpl.java:1833)
		jdk.internal.reflect.GeneratedMethodAccessor73.invoke(Unknown Source)
		java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
		java.base/java.lang.reflect.Method.invoke(Method.java:568)
		com.mysql.cj.jdbc.ha.MultiHostConnectionProxy$JdbcInterfaceProxy.invoke(MultiHostConnectionProxy.java:111)
		com.mysql.cj.jdbc.ha.FailoverConnectionProxy$FailoverJdbcInterfaceProxy.invoke(FailoverConnectionProxy.java:91)
		jdk.proxy2/jdk.proxy2.$Proxy43.next(Unknown Source)
		org.apache.commons.dbcp2.DelegatingResultSet.next(DelegatingResultSet.java:1159)
		com.backendless.tasks.impex.DataTableExporter.lambda$execute$0(DataTableExporter.java:105)
		org.hibernate.jdbc.WorkExecutor.executeReturningWork(WorkExecutor.java:55)
		org.hibernate.internal.AbstractSharedSessionContract.lambda$doReturningWork$2(AbstractSharedSessionContract.java:1117)
		org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.coordinateWork(JdbcCoordinatorImpl.java:308)
		org.hibernate.internal.AbstractSharedSessionContract.doWork(AbstractSharedSessionContract.java:1125)
		org.hibernate.internal.AbstractSharedSessionContract.doReturningWork(AbstractSharedSessionContract.java:1121)
		com.backendless.datamodel.application.dao.AppJpaTransaction.lambda$getExecuteNativePayload$3(AppJpaTransaction.java:117)
		com.backendless.datamodel.application.dao.AppJpaTransaction.executeSync(AppJpaTransaction.java:135)
		com.backendless.datamodel.application.dao.AppJpaTransaction.executeSync(AppJpaTransaction.java:217)
		com.backendless.datamodel.application.dao.AppJpaTransaction.lambda$execute$0(AppJpaTransaction.java:49)
		java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1768)
		com.backendless.async.BackendlessExecutorService.lambda$execute$1(BackendlessExecutorService.java:153)
		java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
		java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
		java.base/java.lang.Thread.run(Thread.java:840)

Environment & Setup

  • Export triggered via API: transfer.startExport
  • Export configuration:
    • schemaOnly: false
    • addDataTypes: true
    • appsettings: true
    • Exporting all tables by ID
  • Export location: /export directory
  • Tables affected: 2 tables, each ~2 million records
  • Other tables: export successfully
  • Frequency: hourly (AWS Lambda)
  • Backendless Console SDK used from Node.js
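For reference, this is roughly what our hourly Lambda step does. It's a sketch only: `createClient` is not shown, and the exact shape of the `transfer.startExport` call is an assumption on our side rather than the documented SDK signature; the configuration values are the ones listed above.

```javascript
// Export configuration as described above.
const exportConfig = {
  schemaOnly: false,   // export data, not just schemas
  addDataTypes: true,  // include column type metadata
  appsettings: true,   // include application settings
  tables: [],          // filled with all table IDs at runtime
};

// Hourly export step; `client` stands in for the console SDK client
// (its real construction/auth is omitted here).
async function runHourlyExport(client, tableIds) {
  // Trigger the export; the resulting ZIP lands in /export.
  return client.transfer.startExport({ ...exportConfig, tables: tableIds });
}
```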

Observations

  • The failure occurs consistently on the same large tables.
  • The error indicates the query is being interrupted due to a timeout on the Backendless/MySQL side.
  • Retrying does not resolve the issue.
  • Smaller tables export without any problems.

Questions

  1. Is there a row limit or execution timeout for table exports?
  2. Are there recommended best practices for exporting very large tables (e.g., chunking, pagination, or table-specific exports)?
  3. Can the query timeout be increased for exports, or can this be configured on your side?
  4. Is there an alternative or recommended backup strategy for large datasets in Backendless?

If helpful, we can:

  • Export tables individually
  • Provide application ID and table names privately
  • Share logs or timestamps of failed exports

Thank you for your help. We rely on this export process for production backups and would appreciate guidance on how to make it reliable for large datasets.

Hi Karim,

What is the rationale for doing an hourly backup of 2M-record tables? The amount of compute power, let alone memory, that this takes (at no extra cost to you) is quite high. We could possibly increase the timeout, but I believe the frequency is not reasonable.

Mark

Hi Mark,

Thanks for the quick reply.

The main reason we’re doing hourly backups is that we’ve had data loss issues in the past, and this was the safest way we found to avoid that happening again. The backups are mainly for short-term recovery (bad deploys, accidental deletes or updates, background job issues), not for long-term storage. We also rotate them and only keep a small window.

We do understand that exporting tables of this size every hour is heavy, and we’re not trying to push unreasonable load. We’re very open to changing the setup if there’s a better or recommended way to handle large tables in Backendless.

If possible, we’d appreciate guidance on things like:

  • whether large tables should be backed up on a different schedule
  • if exporting tables individually helps
  • or if there’s any incremental or chunked export approach we should be using instead
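To make the last point concrete, the chunked approach we have in mind would look something like this. It's only a sketch: we're assuming the data retrieval API's pageSize/offset paging applies here (with pageSize capped at 100, if we read the docs right), and the per-window fetch/export call itself is omitted since it would be SDK-specific.

```javascript
// Split a large table into fixed-size paging windows so each request
// stays small, instead of one export query over the whole table.
function pageWindows(totalRows, pageSize) {
  const windows = [];
  for (let offset = 0; offset < totalRows; offset += pageSize) {
    // Last window may be smaller than pageSize.
    windows.push({ offset, pageSize: Math.min(pageSize, totalRows - offset) });
  }
  return windows;
}

// e.g. ~2M rows in pages of 100 would mean 20,000 small requests,
// which is why we'd like guidance on the right window size.
```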

If increasing the timeout is an option, even temporarily, that would help us keep things stable while we adjust the strategy.

Happy to follow whatever best practice you recommend here. The goal is just to make sure we have reliable recovery points.

Thanks,
Karim

Hi @Karim_Wazzan,

You can try exporting only the records changed during the last hour. To do that, call the SDK method for the /<appId>/console/data/Users/csv endpoint and provide a whereClause along with the list of properties you want exported.

Later on, you will be able to import the records back. The only drawback is that this method cannot track deleted records.
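As a rough sketch of the idea, the whereClause can select rows by the `updated` system column (a millisecond timestamp). The helper below only builds the clause; the actual call to the /<appId>/console/data/<table>/csv endpoint, and its auth headers, are left out and would follow the console SDK.

```javascript
// Build a whereClause selecting rows changed within the last hour,
// based on the `updated` system column (millisecond timestamp).
function lastHourWhereClause(now = Date.now()) {
  const oneHourAgo = now - 60 * 60 * 1000;
  return `updated >= ${oneHourAgo}`;
}

// Hypothetical usage (endpoint shape per the message above):
// const url = `${base}/${appId}/console/data/${table}/csv` +
//   `?where=${encodeURIComponent(lastHourWhereClause())}`;
```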

Regards, Andriy