Many inserts in the same transaction crash the app

I have an application that today works 100% online, with data queries going directly to an API.
I’m now implementing the ability to save queries in SQLite (with [SQLite](http://ionicframework.com/docs/native/sqlite/)), so they can be accessed offline. Everything was going perfectly until I tried to save a query that returns a lot of data, about 20,000 records. That produces 20,000 inserts into the database in my application, which ended up crashing my app with “Out of memory”, as I could see in the Android console.

I make the 20,000 inserts in a single transaction, something like this:

```typescript
this._http.get(url)
  .map(res => <IndicadoresVendasOffline[]>res.json())
  .catch((err) => {
    console.error(err);
    return Observable.throw(err);
  })
  .subscribe((result: IndicadoresVendasOffline[]) => {
    console.log(`Result test: ${result.length}`); // prints: "Result test: 21657"
    this.sqlDatabase.databaseInstance().then(database => {
      // All ~20,000 INSERTs are queued inside a single transaction.
      database.transaction((transaction: SQLiteTransaction) => {
        result.forEach(value => {
          transaction.executeSql(
            `INSERT INTO ${IndicadoresVendasOffline.name} (
                id,
                emissao,
                lancamento,
                previsaoEntrega,
                cliente,
                quantidadeTotal,
                valorLiquido,
                valorBruto
              ) VALUES (?, ?, ?, ?, ?, ?, ?, ?)`,
            [
              value.id,
              value.emissao,
              value.lancamento,
              value.previsaoEntrega,
              value.cliente,
              value.quantidadeTotal,
              value.valorLiquido,
              value.valorBruto
            ],
            () => { console.log("Ok!"); }, // per-statement success callback
            this.sqlError                  // per-statement error callback
          );
        });
      }).then(() => {
        console.log("Transaction Ok!");
        this.loading = false;
      });
    });
  }, (err) => console.error(err));
```

How can I keep this same behavior without causing “Out of memory” in my app?

Note: I have some limitations. For example, I cannot split the request, because the result must represent the data at the moment of the query; if it were split into 2 or more requests, the data could change between them, and the synchronized result would no longer represent the full result of a single query.

This is also being discussed in a question on StackOverflow.

You might have to split the query. If so, you might have to look into transactional memory, to build an atomic register, so that consistency is maintained. This is a well-studied problem, but it isn’t simple to program. More or less, you attach a timestamp to each piece of the query, and you don’t let the database accept updates carrying any later timestamps until your desired query completes. (People can read, because that hurts nothing, but nobody can write.)
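In TypeScript, the timestamp rule might look something like the sketch below. This only illustrates the idea; `SyncGate` and `guardedWrite` are made-up names, not part of the SQLite plugin or any library.

```typescript
// Sketch of the timestamp rule described above. SyncGate and guardedWrite
// are hypothetical names for illustration, not part of any library.
class SyncGate {
  // Timestamp of the sync currently in progress, or null if none.
  private syncStartedAt: number | null = null;

  beginSync(): void {
    this.syncStartedAt = Date.now();
  }

  endSync(): void {
    this.syncStartedAt = null;
  }

  // A write stamped after the sync began is refused until the sync
  // completes; reads are never blocked.
  canWrite(writeTimestamp: number): boolean {
    return this.syncStartedAt === null || writeTimestamp < this.syncStartedAt;
  }
}

const gate = new SyncGate();

// Every write path in the app would go through this guard.
function guardedWrite(write: () => Promise<void>): Promise<void> {
  if (!gate.canWrite(Date.now())) {
    return Promise.reject(new Error("Sync in progress, try again later"));
  }
  return write();
}
```

You would call `gate.beginSync()` before the download starts and `gate.endSync()` once the final transaction succeeds.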

How would I do that? Any material, tip, or suggestion?

I’ve never implemented it in JavaScript. The classic source is Lynch’s *Distributed Algorithms*. It’s a project. I think you should talk to your employer and change the problem you’re trying to solve. It’s unlikely that you really need to synchronize over 20,000 data points. Remember you are writing for a handheld device, not a supercomputer. You might have users who run out of memory after 5,000 data points, or fewer.

This is a requirement of our customers.
The application is a business app for a restricted audience, where devices with minimal specs will be in use.
I’ll have to find some alternative way of managing this data synchronization. =(

A quick and dirty method is:

  1. Ask the database for exclusive access.
  2. The database acks that you have exclusive access.
  3. You do your sync.
     - All other users get a “try again in 5 minutes” message if they try to change the database.
  4. When your sync completes, you release control over the database.

Not sure if this idea will work for your business case, but I’ve seen this implemented for some bigger applications…
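Here is a rough sketch of those steps as an app-level lock, in TypeScript. Everything here is hypothetical (the SQLite plugin has no built-in exclusive-access API like this); it’s just the shape of the idea:

```typescript
// Hypothetical app-level lock implementing the numbered steps above.
class ExclusiveLock {
  private owner: string | null = null;

  // Steps 1–2: request exclusive access; returning true is the "ack".
  acquire(clientId: string): boolean {
    if (this.owner !== null && this.owner !== clientId) {
      return false;
    }
    this.owner = clientId;
    return true;
  }

  // Step 4: release control over the database.
  release(clientId: string): void {
    if (this.owner === clientId) {
      this.owner = null;
    }
  }
}

const lock = new ExclusiveLock();

async function runSync(clientId: string, sync: () => Promise<void>): Promise<void> {
  if (!lock.acquire(clientId)) {
    // Step 3a: everyone else is told to come back later.
    throw new Error("Database busy, try again in 5 minutes");
  }
  try {
    await sync(); // Step 3: the actual inserts happen here.
  } finally {
    lock.release(clientId); // Step 4: release even if the sync fails.
  }
}
```

The `finally` makes sure step 4 happens even if the sync throws, so nobody is stuck seeing “try again in 5 minutes” forever.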

The idea is to treat the data like a file: while the file is being written, no one has access to it; only when the file is closed does it show up in the system with the right extension and become readable. Similarly, get your data in batches and write it into your local DB, but mark it as available only after you receive an end flag, reach the expected record count, etc.
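As a sketch of that idea in TypeScript: `fetchBatch()` is a hypothetical server-side pager (which would have to read from a consistent snapshot, given the constraint above), and `IndicadoresVendasOffline_staging` is an assumed staging table that the rest of the app never reads.

```typescript
// Sketch only: fetchBatch() and the staging table are assumptions.
const BATCH_SIZE = 1000; // assumed chunk size; tune for the target devices

async function syncInBatches(
  database: any, // the SQLiteObject from the question
  fetchBatch: (offset: number, limit: number) => Promise<any[]>
): Promise<void> {
  let offset = 0;
  while (true) {
    const rows = await fetchBatch(offset, BATCH_SIZE);
    if (rows.length === 0) { break; } // end flag: no more rows to sync

    // One small transaction per batch, so the app never queues
    // 20,000 statements in memory at once.
    await database.transaction((tx: any) => {
      rows.forEach(value => {
        tx.executeSql(
          `INSERT INTO IndicadoresVendasOffline_staging
             (id, emissao, lancamento, previsaoEntrega, cliente,
              quantidadeTotal, valorLiquido, valorBruto)
           VALUES (?, ?, ?, ?, ?, ?, ?, ?)`,
          [value.id, value.emissao, value.lancamento, value.previsaoEntrega,
           value.cliente, value.quantidadeTotal, value.valorLiquido, value.valorBruto]
        );
      });
    });
    offset += rows.length;
  }

  // Only now does the data "show up": swap the staging table in.
  await database.transaction((tx: any) => {
    tx.executeSql(`DROP TABLE IF EXISTS IndicadoresVendasOffline`, []);
    tx.executeSql(
      `ALTER TABLE IndicadoresVendasOffline_staging
         RENAME TO IndicadoresVendasOffline`,
      []
    );
  });
}
```

Until the final rename, readers keep seeing the old table, which matches the “file that only becomes readable when closed” analogy.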