By Joy Lal Chattaraj
When I accepted the internship offer from Inkredo, little did I know that I would be designing and deploying the entire backend for their product. It was my first time working with Django, my first time deploying a scalable web server on the cloud, and my first time doing image processing to recognize text in images using computer vision. There have been a lot of learning experiences since then. Let’s begin with Django.
I still remember my first day at Inkredo. I was new to Django, fumbling with it like a five-year-old gifted a toy meant for a ten-year-old. But a challenge is something I have always embraced. I had worked with MVC frameworks before, where we would create a controller for each model, yet this was different. I was struggling to manage the app’s migrations: every now and then a migration attempt would fail on the test server. The actual reason turned out to be foreign-key dependencies between different apps. The migrations had to run in the order they were generated, but when you run the migrations of an app that depends on another, the database can easily end up in an inconsistent state. Then Tanmay, the founder, shared this blog,
I saw the same problem there; they too had these issues. I quickly moved all the interrelated models to a single app, and the migrations didn’t trouble me anymore. Here’s what I did.
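For cross-app foreign keys, Django records explicit dependencies inside each generated migration file, and the failures happened when migrations were applied out of that order. A simplified sketch of what such a migration looks like (the app names, model names and migration file names here are hypothetical, not our actual code):

```python
# loans/migrations/0002_add_applicant_fk.py (illustrative names)
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    # This migration cannot run until the "accounts" app's initial migration
    # has created the table that the foreign key below points at.
    dependencies = [
        ("loans", "0001_initial"),
        ("accounts", "0001_initial"),
    ]

    operations = [
        migrations.AddField(
            model_name="application",
            name="applicant",
            field=models.ForeignKey(
                to="accounts.Profile",
                on_delete=django.db.models.deletion.CASCADE,
            ),
        ),
    ]
```

Consolidating the interrelated models into one app turns these cross-app entries into ordinary same-app dependencies, which `manage.py migrate` always applies in the right order.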
As the code base grew with each passing day, it became difficult for an intern to manage lengthy pieces of code. This was when I was introduced to the concept of modules, thanks to my colleague Droan, who taught me some cool ways to manage a codebase that was growing every day. I broke the code down into several parts, each serving its own purpose, and imported those functions where needed. It was the first time I was managing a large code base; my apps in the past had been quite small because Rails used to handle most of the tasks. This time I was designing APIs from scratch with a little help from Django. It was too big a responsibility to manage almost all aspects of the application and learn every detail of how things work in the real world, because it was all new to me. I had never felt as confident as a developer before.
Before working at Inkredo, I had only read about the challenges of working in a startup. Here I had a new problem to solve each day; some were common, and I could easily find answers on the internet, while the not-so-common ones took a bit longer to solve. It was the first time I was experiencing what it means to learn on the job. Thanks to Stack Overflow, many problems were solved far faster than they would have been otherwise. I will be sharing some of my important lessons in this article, and I hope they help some of you out there.
One of the problems I faced was uploading files directly to AWS S3 from an API request. The file arrives as an “InMemoryUploadedFile”, and Python’s “boto” isn’t well documented for uploading such a file object. So, here’s a code snippet that does the task, i.e., gets a file object from a request and uploads its content to a specific S3 bucket.
Now came the time to deploy my code. I had deployed a few web servers before, but those were for a small audience; scaling was something I hadn’t done. Most of those servers were Linux instances on the cloud with a database installed alongside the application, which was served by a single web-server process.
But this time I had to configure storage buckets for static files, load balancers, a separate database instance and an auto-scaling environment to scale the number of web servers according to demand. Later, I went ahead and deployed a few features like monitoring and alarms for my server instances and a task queue to manage some async tasks. I also considered deploying a caching server, but since our application isn’t read-heavy and almost every call writes to the database, it didn’t make sense.
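As an illustration of the monitoring piece, a CPU alarm on an instance can be created through boto3’s CloudWatch client; the alarm naming scheme, threshold and SNS topic below are made-up example values, not our actual configuration.

```python
def create_cpu_alarm(cloudwatch, instance_id, sns_topic_arn):
    """Alarm when average CPU stays above 80% for two 5-minute periods.
    `cloudwatch` is a boto3 CloudWatch client: boto3.client("cloudwatch")."""
    cloudwatch.put_metric_alarm(
        AlarmName="high-cpu-" + instance_id,  # illustrative naming scheme
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[sns_topic_arn],  # e.g. an SNS topic that emails the team
    )
```

Pointing `AlarmActions` at an SNS topic is what turns the metric breach into an actual notification.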
It was a no-brainer to use AWS because I was familiar with it. One of their services, Elastic Beanstalk, made the deployment process easy: once you configure everything properly, deploying the next iterations of your application becomes really simple. I used to spend an hour or two every day on the platform. For the first few weeks, I played a lot with the infrastructure, trying every day to automate more of the deployment process.
It is necessary to set up billing and budget alerts before you start deploying things. I learned this the hard way when I accidentally deployed an MxLarge RDS instance costing $5/hr, and it ran for the next 48 hours until I saw the huge bar on the billing page. Thanks to the awesome support from Amazon, they understood the situation and waived the bill for that month. ($250/month is a huge cost for a startup in its early days; I know another startup that is billed $13,000/month for a user base of almost 25m.) Now we spend less than $10 a month, thanks to the AWS free tier.
The Elastic Beanstalk console is quite limited in functionality, but its command-line version gives you full control over every resource you are running; you can SSH into an EC2 instance anytime and fix something. Below are some resources and code snippets that helped me a lot during the deployment process.
Here’s how to add customizations to the apache config files on each ec2 instance.
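Elastic Beanstalk picks up YAML files placed in an `.ebextensions` directory at the root of your source bundle; a sketch like the following drops an extra Apache config onto each instance. The file path, mode and directive here are only an example, not our actual settings:

```yaml
# .ebextensions/apache.config (illustrative)
files:
  "/etc/httpd/conf.d/custom.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      # Raise the request body limit for large document uploads (20 MB)
      LimitRequestBody 20971520
```

Because the file lives in the source bundle, every instance the auto-scaler launches gets the same customization without manual SSH work.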
Advice on deploying an update to your application on Beanstalk: never ship new code directly to the production environment (however well tested it may be, even when the update method is set to roll out one instance at a time). It is always possible that the code will break due to some dependency.
Instead, spin up a parallel environment, deploy the new code there, and check that everything works as expected.
Once you feel everything is right, use the swap-URL feature on the Elastic Beanstalk console to move users to the new environment. Ensure all the traffic has migrated to the new environment before shutting down the old one, and always create alarms to keep you updated on any issues.
The motive of the app is to automate the entire loan application and processing flow. This involved a lot of character recognition, as the target group holds its financial documents in hard copies. Enabling them to autofill their information would make the task easier and avoid the human errors of manual typing. I first designed an OCR pipeline using Google’s OCR library Tesseract. The classifier produced good results when reading standardized documents such as a PAN card, but as the complexity of the document grew, such as a cheque leaf, getting a good accuracy score became difficult. To avoid the complexities of training a custom classifier and deploying it on the cloud (which would require a significant amount of computation), we decided to use Microsoft Azure’s Vision API. It provided us the coordinates of all the text, and all we had to do was look for strings resembling a PAN number, or an account number and IFSC code from a cheque. I then wrote a few regular expressions that made it easy to find close matches to the results we needed.
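As an example of the regex part: PAN and IFSC codes have fixed formats (a PAN is five letters, four digits, one letter; an IFSC is a four-letter bank code, a zero, then six alphanumerics), so a sketch like this can pick them out of raw OCR text. The `find_pan`/`find_ifsc` helpers are illustrative, not our exact production code.

```python
import re

# PAN: 5 uppercase letters, 4 digits, 1 uppercase letter, e.g. ABCDE1234F
PAN_RE = re.compile(r"\b[A-Z]{5}[0-9]{4}[A-Z]\b")
# IFSC: 4-letter bank code, a literal 0, then a 6-character branch code
IFSC_RE = re.compile(r"\b[A-Z]{4}0[A-Z0-9]{6}\b")


def find_pan(ocr_text):
    """Return the first PAN-shaped token in the OCR output, or None."""
    m = PAN_RE.search(ocr_text)
    return m.group(0) if m else None


def find_ifsc(ocr_text):
    """Return the first IFSC-shaped token in the OCR output, or None."""
    m = IFSC_RE.search(ocr_text)
    return m.group(0) if m else None
```

Anchoring on these fixed shapes means a few misread characters elsewhere on the document don’t matter, as long as the code itself was read correctly.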
Later we extended this to read bank statements; this is where even Azure failed to read everything in the image. We had tried Google Vision’s API earlier, but the output wasn’t satisfactory. So we decided to work on making the image more readable. I came across a lot of image filters whose main purpose is to convert an image to pure black and white, with no other colors. I tried out several of them, including mean, median and Gaussian thresholding. The one that worked best for us was a custom-designed filter based on Otsu’s thresholding principle.
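Otsu’s method picks the threshold that maximizes the variance between the dark (ink) and light (paper) pixel classes, rather than using a fixed cutoff. A minimal NumPy sketch of the idea (in practice OpenCV’s `cv2.threshold` with the `THRESH_OTSU` flag does the same thing in one call):

```python
import numpy as np


def otsu_threshold(gray):
    """Return the intensity threshold that maximizes between-class variance.
    `gray` is a 2-D array of uint8 pixel intensities."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = gray.size
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, 0.0
    w_b = 0.0    # weight (pixel count) of the background class
    sum_b = 0.0  # intensity sum of the background class
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / w_b
        mean_f = (sum_all - sum_b) / w_f
        var_between = w_b * w_f * (mean_b - mean_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t


def binarize(gray):
    """Map pixels above the Otsu threshold to white (255), the rest to black."""
    t = otsu_threshold(gray)
    return np.where(gray > t, 255, 0).astype(np.uint8)
```

Because the threshold is derived from each image’s own histogram, scans with different lighting or paper shades all end up cleanly black and white without per-image tuning.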
Finally, a bit of cropping and rotating the image helped us significantly improve the accuracy. But as we uploaded more documents, it failed a lot of the time because of the images’ orientations and stray text that showed up outside the actual statement (the cropping and rotation step malfunctioned here). Someday I wish to build a custom-designed solution for this with much better accuracy; until then I will be improving my machine learning skills.
For the credit assessment, our users trusted us with their transactional messages, which gave us a closer look into their financial health. I set up a mechanism for capturing and storing their data securely in our database, but it requires a good amount of data before we can start training a custom text classifier that would categorize those messages and figure out the amount spent. We are working on building a sentiment analyzer for the same. If you know anyone working on it, or anyone interested in working on it, do write to us.
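A first step towards that, pulling the amount out of a transactional SMS, can already be done with a regular expression; the message formats and currency prefixes below are invented examples, since real bank SMS templates vary widely.

```python
import re

# Matches amounts like "INR 1,234.50", "Rs. 500" or "Rs.2,000" —
# the prefixes and formats covered here are assumptions, not a full list.
AMOUNT_RE = re.compile(r"(?:INR|Rs\.?)\s*([\d,]+(?:\.\d{1,2})?)", re.IGNORECASE)


def extract_amount(sms_text):
    """Return the first monetary amount in the SMS as a float, or None."""
    m = AMOUNT_RE.search(sms_text)
    if not m:
        return None
    return float(m.group(1).replace(",", ""))
```

A rule like this can bootstrap the labels (amount, debit vs. credit) that a trained classifier would later refine.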
It's been a great 8 weeks of learning at Inkredo. The best part is that the team trusted me; they allowed me to play around with the tech and come up with my own solutions. I was involved in every decision, and every new idea was brainstormed before execution. I joined as a backend intern, but I ended up doing a lot more.
In a nutshell, these are the products I contributed to:
It felt like I applied everything I had learned over the past years, from information security to web development, a bit of DevOps and even data analytics. If there is one thing I still want to improve, it is writing tests for my code, as the functionality coverage of my tests was not enough to ensure nothing was broken. I hope I will be building some challenging stuff for Inkredo again in the future.