An expense splitting hybrid app powered by Natural Language Processing and developed using React Native with Expo
When voice assistants began to emerge in 2011 with the introduction of Siri, no one could have predicted that this novelty would become a driver of tech innovation. Growth in the speech and voice recognition market can be attributed to the rising acceptance of advanced technology together with increasing consumer demand for smart devices. This motivated us to build our app, SplitUp. SplitUp lets you keep track of your expenses and loans without having to save a heap of bills and type the information in by hand. It does all this on the go, at the command of your voice.
- Awesome Tech Stack
- Clear, Quantifiable and Testable Hypothesis
- Considerate Pre-requisites
- Endless Scope
React Native (JavaScript), Python (NLTK, Flask, TensorFlow, etc.), NLP (entity extraction, voice recognition), REST API, Firebase
Hypothesis: As a result of its voice recognition feature, SplitUp is faster at expense recording than any other app currently on the market.
Evaluation: To evaluate this hypothesis, we will conduct a time analysis on users and calculate the average time it takes a user to add the same information in SplitUp and in a competing app. Computing this average over a sample of users gives a fair way to test our hypothesis.
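The evaluation above can be sketched as a small script. The timing values below are placeholders, not real study results; in practice they would come from the per-user measurements described above.

```python
from statistics import mean

# Hypothetical per-user timings (seconds) for entering the SAME expense
# in SplitUp (voice) and in a competing app (manual entry).
# These numbers are illustrative placeholders, not measured data.
splitup_times = [12.4, 9.8, 14.1, 11.0, 10.5]
competitor_times = [21.7, 18.9, 25.3, 19.6, 22.8]

def average_time(times):
    """Average task-completion time over the user sample."""
    return mean(times)

speedup = average_time(competitor_times) - average_time(splitup_times)
print(f"SplitUp avg:     {average_time(splitup_times):.1f}s")
print(f"Competitor avg:  {average_time(competitor_times):.1f}s")
print(f"Mean time saved: {speedup:.1f}s per entry")
```

A paired design (each user performs the task in both apps) keeps individual typing speed from confounding the comparison.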
Qualitative evaluation: We will also conduct a feedback survey to record users' experiences with our app compared to other similar apps.
This project only requires basic knowledge of the fields described below; one can easily catch up in no time. The database is implemented in Firebase, which offers easy integration APIs for application development. We have already created some query functions that can be modified to retrieve different pieces of data from the Firebase database. Apart from this, a basic knowledge of NLP concepts such as entity extraction and POS tagging is enough to continue this project.
Currently, we use the Google Keyboard microphone to convert voice to text. One major avenue for improvement is migrating this project from Expo to bare React Native, which would allow the use of several speech-to-text APIs such as React Native Voice and IBM Watson Speech to Text.
Information extraction from text is a very popular field in NLP, with new advancements happening every day. Our software currently handles the processing of simple transactions involving a single participant, along with keyword-based expense matching, and has a lot of room for innovation and improvement.
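The kind of single-participant, keyword-based extraction described above can be illustrated with a minimal sketch. This is not the actual SplitUp pipeline; the function name, regex, and category keywords are simplified stand-ins for illustration.

```python
import re

# Illustrative keyword lists for keyword-based expense matching;
# the real app's categories and vocabulary may differ.
CATEGORY_KEYWORDS = {
    "food": {"lunch", "dinner", "pizza", "groceries"},
    "travel": {"cab", "taxi", "flight", "train"},
}

def parse_transaction(sentence):
    """Extract (participant, amount, category) from a simple
    single-participant sentence like 'I paid John 20 dollars for pizza'."""
    match = re.search(r"paid (\w+) (\d+(?:\.\d+)?)", sentence)
    if not match:
        return None  # sentence doesn't fit the simple pattern
    participant, amount = match.group(1), float(match.group(2))
    # Keyword-based matching: pick the first category whose keywords
    # overlap with the words of the sentence.
    words = set(sentence.lower().split())
    category = "other"
    for cat, keywords in CATEGORY_KEYWORDS.items():
        if words & keywords:
            category = cat
            break
    return participant, amount, category

print(parse_transaction("I paid John 20 dollars for pizza"))
# ('John', 20.0, 'food')
```

Replacing the regex with proper POS tagging and named-entity recognition is exactly the kind of improvement this section invites.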
- Install Node.js and npm
- Install the Expo CLI
- Clone or download this repository
- Run `pip install -r requirements.txt` from the SplitUp directory.
- Go to the SplitUp/app directory and run `npm install` (this installs all React Native dependencies required for the app).
- Create a web app on the Firebase console and add your app's API credentials in `SplitUp/app/firebase/config.js`.
- On the Firebase console, go to Project settings -> Service Accounts -> Python SDK and generate a new private key. Rename the downloaded JSON file to `config.json` and place it in the `SplitUp/SplitUpServer` directory.
- Start the server by going to `SplitUp/SplitUpServer` and running `python Server.py`.
- Start the app by going to `SplitUp/app` and running `npm start` or `expo start`.