Read a file in a backend service

I would like to read some files from the file system in a service I’m building.

The basic plan is a service that delivers a concatenation of a number of files held under a particular path on the storage system. I’d like to combine and cache the contents of these files.

I have the solution working when the data is in the database, but I’m concerned that I may hit the maximum text field limit if I just hold the data in there.

I can’t find any kind of API to actually open the files though, which is going to mean I have to pass all of the data back to the client and have it continually reading from these locations - rather than having a timer do all of the work once a minute or so.

Could you let me know if there is a way of accessing these files (should I just HTTP GET them?), and also what the storage limits are at present for a TEXT field?

Hi, Michael.
Why can’t you do this in a timer?
Just use our SDK (for Android or JS) to get the files from the file service, concatenate them and save the result in another file. You can store the path to the new file in the db.
If you use the db, the maximum data size for a TEXT column is about 21,000 chars.
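
As a rough sketch (untested; the folder, file names, URLs and table name below are just placeholders, I’m assuming Node 18+ so fetch is available globally, and Files.saveFile / Data.of(...).save are used as per the JS SDK docs):

```javascript
// Runs from a timer: download each source file over HTTP, concatenate
// the contents, save the result as a new file and store its URL in a
// data table. All names and URLs here are placeholders.
const SOURCE_URLS = [
  'https://myapp.backendless.app/api/files/config/part1.txt',
  'https://myapp.backendless.app/api/files/config/part2.txt'
];

async function rebuildCombinedFile() {
  // Fetch each file's contents via its public file URL
  const parts = await Promise.all(
    SOURCE_URLS.map(url => fetch(url).then(res => res.text()))
  );
  const combined = parts.join('\n');

  // Save the concatenation back to the file service (overwrite = true)
  const fileUrl = await Backendless.Files.saveFile('config', 'combined.txt', combined, true);

  // Store the path/URL of the new file in the db
  await Backendless.Data.of('CombinedConfig').save({ fileUrl });
}
```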

This is running in a backend service, so in your NodeJS framework. I’ve solved it by doing an HTTPS request for the file - but FileService doesn’t seem to expose an open or download method.

Michael,

You’re correct, there is no method to download a file, for the reason that most programming environments already provide a way to fetch the bytes for a URL. That’s the mechanism that should be used to download file contents.
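
For example, from Node code it can be as simple as this (the URL below is just a placeholder for your file’s public URL):

```javascript
// Minimal sketch: fetch the bytes for a file URL with Node's built-in
// https module. The URL is a placeholder.
const https = require('https');

function downloadFile(url) {
  return new Promise((resolve, reject) => {
    https.get(url, res => {
      const chunks = [];
      res.on('data', chunk => chunks.push(chunk));
      res.on('end', () => resolve(Buffer.concat(chunks)));
    }).on('error', reject);
  });
}

downloadFile('https://myapp.backendless.app/api/files/config/part1.txt')
  .then(buf => console.log(buf.toString('utf8')));
```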

Regards,
Mark

OK, it just seems like a really slow way of accessing a file in a server process - going through HTTP etc., where normally on the server you’d read a configuration file directly in NodeJS using fs.
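
i.e. where I’d normally just do something like this (the path is purely an example):

```javascript
// What I'd normally do on a box I control: read the definition straight
// off the local disk. The path is just an example.
const fs = require('fs');
const definition = fs.readFileSync('/etc/myapp/definition.json', 'utf8');
```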

This is especially important where I have an unstructured document that is too big to put in a 21k field but needs to be read quite frequently. It’s also too big to be cached, of course.

I guess everything is residing on different boxes etc., so this is the best way through :slight_smile: Either that, or I get it rewritten as JavaScript and just require it? Obviously that might be cached, though.
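
Something like this, I mean (file name is just illustrative):

```javascript
// definition.js - the whole definition exported as a JS module
module.exports = {
  steps: [ /* ... */ ]
};

// elsewhere in the service - Node caches the module after the first
// require, which is the caching I mentioned above
const definition = require('./definition.js');
```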

I believe if you have a 21k configuration file which should be loaded all at the same time, odds are you’re doing something wrong. The best way to access this kind of information is to use a cache - and we have such a service. I would suggest you break this config into reasonable parts and use them where you need them.
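
Roughly along these lines (key names and the rebuild helper are placeholders; check the Cache docs for the exact put/get signatures):

```javascript
// Rough sketch: keep each part of the config in the Cache service and
// read only the part a given step needs. Names here are placeholders.
async function loadConfigPart(partName) {
  let part = await Backendless.Cache.get('config-' + partName);
  if (!part) {
    // Cache miss: rebuild this part (rebuildConfigPart is a hypothetical
    // helper - e.g. read it from the file service or the db)
    part = await rebuildConfigPart(partName);
    await Backendless.Cache.put('config-' + partName, part, 3600); // time to live
  }
  return part;
}
```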

Pretty sure I’m not doing anything wrong :slight_smile:

I’m just designing as if I had real NodeJS and didn’t need to split something into multiple bits, all of which I will need.

I’ve rewritten it to create multiple entries in a data table and then just concatenate them from there. Again, it feels like it’s not unreasonable for a process to require a 21k definition of what should be done.
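
The chunking approach looks roughly like this (table and column names are my own):

```javascript
// Rough sketch: the definition is stored as ordered chunks in a data
// table and reassembled on read. Table/column names are my own.
async function loadDefinition(definitionId) {
  const query = Backendless.DataQueryBuilder.create()
    .setWhereClause(`definitionId = '${definitionId}'`)
    .setSortBy(['chunkIndex'])
    .setPageSize(100);

  const chunks = await Backendless.Data.of('DefinitionChunk').find(query);
  return chunks.map(c => c.text).join('');
}
```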

For reference, I’m investigating whether Backendless can be used to stand up lower-cost implementations of my company’s marketing automation system. Presently we use AWS and provision individual clients there. I’m keen to get this working so we can target smaller organisations and reduce the overhead. To do that I’m implementing a small client on your system.

It’s going pretty well; it’s just that the orchestrated process for an individual can contain more than 21k of definition for all of the steps. It’s not very efficient for me to have to read this definition from storage each time, I guess - on AWS I’d just have it in memory and everything would be instant. I think the approach of splitting it into data table entries will work for this simple example, but it worries me that this is the simple case. A real client with a fully implemented system might have a definition that was 250k and was serving 100k - 1m clients per day. The question is whether I can keep these smaller clients within a reasonable processing timeframe, because it’s not the number of users that determines the complexity of the system.