Using Tensorflow.js Model with Freshdesk Custom Apps

I have a pretrained tensorflow.js model that can be loaded and used to predict features.
However, when attempting to upload the custom serverless app to Freshdesk, Freshdesk replies with: "Your app was not published due to some technical issues."

I am not too sure what could be causing this error.

For my tensorflow.js model to work, these imports are required:

const tf = require("@tensorflow/tfjs");
const use = require("@tensorflow-models/universal-sentence-encoder");

To load a model, you provide the path to the model's model.json file in the file directory. This is done via:

const handler = "./modelV1/model.json";

Note, however, that I have to keep the model outside of the server folder for the above line of code to detect it.

The app works locally on localhost: it responds to events and predicts as expected. So I am not sure what is causing the problem.

Any help would be appreciated. Feel free to ask me for more information if needed :slight_smile:

Hi @Shairyl

Welcome to the Freshworks developer community.

Please share the app ID so that I can check the logs and let you know what's causing the issue.


Hi, yes, it is #64649 on our sandbox environment.

Hi @Shairyl

The app deployment process timed out because of the large npm dependencies that need to be installed. Could you try some lightweight alternative packages? Meanwhile, I will check with my team for other solutions.


cc : @Raviraj

can you provide some help here?

Could you let me know the limit of the overall size that Freshdesk accepts?

Hey @Shairyl,

Firstly, welcome to the Freshworks developer community. :tada:

Adding on to what @Mughela_Chandresh mentioned about large NPM dependencies: the serverless component is designed for lightweight workloads that run quickly, so it comes with technical constraints such as limits on package size and no access to the file system.

Even if we increased the supported dependency size, you would still not have access to the file system.

const handler = "./modelV1/model.json";

This is not supported.

We understand the use case you are trying to accomplish. It would be best designed so that the serverless component invokes the ML model with inputs and retrieves the prediction over RESTful APIs.
Note: this solution would make the ML model and its processing, external to the Freshworks developer platform.
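A minimal sketch of that approach is below. The endpoint URL and payload shape are placeholders, not a real service, and the sketch assumes a Node runtime with the global `fetch` API (Node 18+):

```javascript
// Sketch: call an externally hosted ML model over HTTP from the serverless
// component. The URL and the { inputs: [...] } payload are hypothetical;
// substitute whatever contract your hosted model exposes.
async function predict(url, texts) {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ inputs: texts }),
  });
  if (!res.ok) throw new Error(`Prediction failed: HTTP ${res.status}`);
  return res.json();
}

// Usage, e.g. inside a ticket-created event handler (endpoint is a placeholder):
// const result = await predict("https://ml.example.com/predict", [ticketSubject]);
```

This keeps the model and its heavy dependencies entirely outside the app package, so the deployment stays small.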

Let me know if you would like to brainstorm ways this could be achieved, mindful of the features we offer.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.