I have a pretrained tensorflow.js model that can be loaded and used to predict features.
However, when attempting to upload the custom serverless app on Freshdesk, Freshdesk replies with: "Your app was not published due to some technical issues."
I am not too sure what could be causing this error.
For my tensorflow.js model to work, these imports are required:
```javascript
const tf = require("@tensorflow/tfjs");
require("@tensorflow/tfjs-node");
const use = require("@tensorflow-models/universal-sentence-encoder");
```
To load a model, you provide the absolute path to the model file on disk. Note, however, that I have to keep the model outside of the server folder for the load call to detect it.
The app works locally on localhost: it responds to events and predicts as expected. So I am not sure what is causing the problem.
Any help would be appreciated. Feel free to ask me for more information if needed.
The app deployment process timed out because of the large npm dependencies that need to be installed. Can you check for some lightweight alternative packages? Meanwhile, I will check with my team for other solutions.
Firstly, welcome to the Freshworks developer community.
Adding on to what @Mughela_Chandresh mentioned about large NPM dependencies: the serverless component is designed for workloads that perform tasks in a fast, lightweight way, and it comes with some technical constraints, such as limits on dependency size and no access to the file system.
Even if we increased the supported dependency size, you would still not have access to the file system.
We understand the use case that you are trying to accomplish. It would be best designed so that the serverless component invokes the ML model with inputs and retrieves the prediction over RESTful APIs.
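As a rough sketch, the serverless handler would then only build a small JSON payload and call the externally hosted model over HTTP. The endpoint URL below is a placeholder, and the `$request` usage is indicative of the platform's Request API rather than a drop-in implementation:

```javascript
// Placeholder URL for an externally hosted prediction service.
const PREDICT_URL = "https://example.com/predict";

// Build the HTTP options for the external prediction call.
function buildPredictionRequest(ticketText) {
  return {
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: ticketText }),
  };
}

// Inside a serverless event handler, the call would look roughly like
// this ($request is the platform-provided Request API, available only
// at runtime on the Freshworks platform):
async function onTicketCreateHandler(args) {
  const opts = buildPredictionRequest(args.data.ticket.subject);
  const res = await $request.post(PREDICT_URL, opts);
  return JSON.parse(res.response); // prediction from the external model
}

console.log(buildPredictionRequest("refund request").body);
```

This keeps the serverless bundle free of heavy ML dependencies; only the thin HTTP client logic ships with the app.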
Note: this solution would make the ML model and its processing, external to the Freshworks developer platform.
Let me know if you would like to brainstorm on ways this could be achieved mindful of the features we offer.