![](https://static.wixstatic.com/media/b5801e_ca27a41f453b4bc996d4f60c04534598~mv2.jpg/v1/fill/w_216,h_288,al_c,q_80,usm_0.66_1.00_0.01,blur_2,enc_avif,quality_auto/b5801e_ca27a41f453b4bc996d4f60c04534598~mv2.jpg)
MY PORTFOLIO
Hackathon Projects
Project XPRAI
Inspiration
Depression affects a large part of society, and our program aims to help flag possible depression using your facial features and your words
What it does
XPRAI primarily uses facial features to detect emotions, then asks you a series of questions to assess your emotional state (happiness, sadness, depression)
How I built it
We used OpenCV to detect facial features, then fed the results into our other libraries to determine emotional state.
Challenges I ran into
At first we couldn't load the data we needed to build our lexicon. It took a while to figure out, because the lexicon's text file was very large and slow to open, but eventually we were able to import the lexicon into our program.
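As a rough illustration of importing a large lexicon file like the one described (the exact file format here is an assumption; many sentiment lexicons are tab-separated word/score pairs):

```python
def load_lexicon(path):
    """Parse a tab-separated sentiment lexicon (word<TAB>score per line)
    into a dict, skipping blank or malformed lines."""
    lexicon = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.strip().split("\t")
            if len(parts) < 2:
                continue  # blank line or comment without a score
            try:
                lexicon[parts[0]] = float(parts[1])
            except ValueError:
                continue  # score column wasn't numeric
    return lexicon
```

Parsing line by line like this keeps memory use flat even when the lexicon file is very large.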
Accomplishments that I'm proud of
Using Haar cascades to detect facial features
Combining nltk and OpenCV to create something potentially helpful for society
What I learned
We learned a lot about nltk and how it can be applied to a wide variety of problems and situations, not just linguistics.
What's next for XprAI
Implementing our own TensorFlow ML model to increase accuracy.
Project Specto
About
Today, technology supports people everywhere: it runs our traffic signals, connects us to the internet, and even powers the toaster at home. Our inspiration was this supporting role that technology plays in society. With this project, we plan to extend that support to a new group of people.
The project has two protocols. The blind protocol helps a visually impaired user perceive the world through their other senses: at any time, the user can ask Specto to take a picture, and Specto will announce all the objects in the area. If there is text in front of the user, such as a book, it reads the text aloud. Specto can also alert the user when a wall or object is ahead, guiding them through the world. The deaf protocol is a web app that transcribes real-world audio into text for the user to understand; the user can type back, and the reply is converted into audio for the other person. Together, these two protocols support both blind and deaf people in their everyday lives.
This project has multiple parts, split among the members of our team. We built the blind protocol using infrared sensors to tell the user when there is an object in front of them; the computing happens on a Raspberry Pi. When the user asks Specto to take a picture, the Pi Camera captures it, then the image-to-text algorithm and the multi-object detection program run to inform the user of what is in front of them. The object detection uses the Google Vision API, and the image-to-text uses the Python library pytesseract. To take a picture, the user says "Specto take a picture," which is detected with the speech recognition library. The deaf protocol uses the speech recognition library as well: it transcribes the other person's voice into text, and the deaf user can reply by typing on the web app, where gTTS turns the text into speech.
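The blind-protocol pipeline described above (capture, object detection, OCR, speak) can be sketched as a small glue function. The function and parameter names here are hypothetical; the OCR, detection, and speech pieces are injected so the real services (pytesseract, the Google Vision API, gTTS on the Pi's speaker) could be swapped in:

```python
def describe_photo(image, ocr, detect_objects, speak):
    """Glue for Specto's blind protocol: run object detection and OCR on a
    captured image, then read the results aloud.

    ocr(image) -> str, detect_objects(image) -> list[str], and
    speak(str) are injected callables standing in for pytesseract,
    the Google Vision API, and gTTS respectively.
    """
    objects = detect_objects(image)
    if objects:
        speak("I can see: " + ", ".join(objects))
    text = ocr(image)
    if text.strip():
        speak("There is text in front of you. It says: " + text.strip())
```

Injecting the services this way also makes the pipeline testable without a camera or network access.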
Challenges
One challenge was getting the JSON service key for the Google Vision API to work with our Python code. After that, our object detection only worked on the demo file, but a few minor changes made it work for any image. For the image-to-speech part of the blind protocol, we also had trouble playing MP3 files with playsound, because the Pi could not find the files.
What we learned
We learned how to make an organized project with 2 protocols that work together to solve a single problem.
What's Next?
We are in the process of adding features for users with hearing disabilities who want to use our platform; the ability to type back and play responses as audio is coming soon.
Because of social distancing, we had to tackle this problem separately, so the finished project is still somewhat split into parts. After the pandemic, we plan to meet and connect the parts into a more unified whole.
Additionally, we would like to improve our use of the Vision API and make the voice-to-text more reliable.
Project gymshare
Inspiration
Since the start of quarantine for COVID-19, almost all gyms have been shut down due to safety concerns. Gyms are among the least safe places during the pandemic because of all the sweat and germs passed between people. Before the pandemic hit, most of us were finally building up a routine of going to the gym at least 3-4 times a week to hit the grind, but with COVID, everything changed.
Since quarantine in March, all of us have been locked in our rooms and glued to our laptop screens. Over the months, we have all been slacking in our chairs, eating food, and only gaining weight. On top of that, during the first three months, everyone feared even stepping out of their house to check their mail or even go on a short walk.
However, during this time, a small percentage of people took this extra time as an advantage to build a home gym and continue their daily workout just from the comfort of their homes. Even the slightest bit of exercise like running on the treadmill or using the pull-up bar benefits your physical health in so many ways.
It took a couple of months for all of us to realize that instead of spending an extra hour playing Valorant or Rocket League, we could go downstairs to the garage and burn a couple of calories. However, setting up a home gym is quite expensive, and it doesn't have all the facilities an average gym may have. Some people use home gyms for burning off fat through cardio, while others use home gyms to gain muscle fast. Both types of gyms serve the same purpose: burn calories and maintain your physical health, but they differ in the kind of equipment they use.
Ever since our friends started working out, we got motivated to work out too, but all the equipment was costly to purchase, so we would go to some of our friends' houses on the weekends for a workout session. Over time, we started going to other friends' houses, each with different sorts of equipment, which helped us work out different muscles.
Sometimes it would have been nice to show up at a friend's house at a time of our choosing and pick whatever equipment we preferred that day, just like how Airbnb works. This is where the idea of GymShare comes in.
What it does
Platform
GymShare is a **web application** and an **app** that lets users work out at a nearby gym or register their own home gym on the platform. It's pretty much an **Airbnb, but just for gyms**. People can book a nearby gym through the website or app based on which gym equipment they want to use and how far it is from their house. When users log in to the website, they are prompted to make an account, entering personal information such as full name, email, phone number, height, and weight. Based on that information, our machine learning algorithm calculates the user's best fitness plan and recommends a nearby gym with the necessary equipment.
Hardware
We have also built a companion fitness tracker that integrates with our platform. The tracker shows the user the intensity of their workout and the calories they burned. This data is beneficial, as it lets users determine which activities help them the most and which gyms they should book.
Our fitness tracker was built with a **Raspberry Pi Zero W** and the **SW-420 vibration sensor**; it is programmed to estimate how many calories you burned and your workout intensity. This data is sent to a Google Form, which the backend of our website accesses through Flask, and the site updates as new data arrives from the hardware.
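To give a flavor of how sensor readings might become workout stats, here is an illustrative sketch. The function name, the per-shake calorie constant, and the intensity formula are all made up for illustration, not GymShare's calibrated logic:

```python
def summarize_workout(vibration_counts, interval_s=1.0, kcal_per_shake=0.05):
    """Summarize SW-420 vibration-sensor readings into workout stats.

    vibration_counts is a list of shake counts per sampling interval.
    kcal_per_shake is an illustrative placeholder, not a calibrated value.
    """
    total_shakes = sum(vibration_counts)
    duration_min = len(vibration_counts) * interval_s / 60
    return {
        "calories": round(total_shakes * kcal_per_shake, 2),
        "intensity": total_shakes / max(len(vibration_counts), 1),
        "duration_min": round(duration_min, 2),
    }
```

A summary dict like this is easy to append as one row of a spreadsheet, which matches the Google Form / Google Sheet pipeline described above.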
How we built it
We used several different technologies to put this all together. With Flask, we built a dynamic form that collects data from the user to determine the best workout plan and which gyms they should book. With HTML5, we built an eye-catching landing page that makes it simple to understand the product, log in to your dashboard, and gather information. We used Google Cloud technologies such as Firebase, the Maps API, and the Google Drive API for spreadsheets (data collection) and sign-in. We used a Raspberry Pi and Python to build the hardware component of the project, a fitness tracker. We also used the Twilio API to send live notifications to the user: gym-booked confirmations, workout-started alerts, calorie updates, and post-workout summaries. Data such as calories burned and training intensity is collected from the Raspberry Pi, pushed to a Google Sheet, and from there shown on the website with the latest data on your workout.
Website
For the user data collection portion of our project, we used **Flask** to collect each user's body statistics and workout objectives so we could give them the best and most accurate workout plan. They can then choose the gym that fits their needs and reach their long-desired goals with ease. We used the Google Maps API to display gyms located near the user, so they can find an appropriate gym efficiently and in an organized fashion. Additionally, we graph the intensity and calories from the movement sensor so users can see where they stand at every point of their workout, making it easier to pinpoint strengths and weaknesses, work out more productively, and improve.
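A minimal sketch of the Flask data-collection idea might look like the route below. The endpoint path, field names, and the BMI-based plan rule are all illustrative assumptions, not GymShare's actual logic:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/signup", methods=["POST"])
def signup():
    """Collect the user's body statistics from a submitted form and
    return a rough workout-plan recommendation as JSON."""
    height_m = float(request.form["height_cm"]) / 100
    weight_kg = float(request.form["weight_kg"])
    bmi = weight_kg / (height_m ** 2)
    # Toy rule of thumb for the demo: heavier users get a cardio focus.
    plan = "cardio-focused" if bmi >= 25 else "strength-focused"
    return {"bmi": round(bmi, 1), "plan": plan}
```

In a fuller version, the submitted stats would also be written to the database that backs each user's profile.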
Hardware
Using a vibration sensor (SW-420) with the Raspberry Pi, we built a fitness tracker similar to a Fitbit. The tracker integrates directly with our platform by sending reminders via Twilio, logging workout data (calories, intensity, workout start/stop), and displaying graphs to help you improve your workout.
Challenges we ran into
We had issues connecting the hardware to our platform, but we overcame this by using the Google Drive API to share data over spreadsheets. At the last minute, our code fell apart and began throwing errors, so we had to go back and put it all together.
We also ran into issues inputting data through a Flask form and exporting it to a database; we overcame this by reading the documentation carefully and enabling the appropriate APIs in the Google Cloud Platform. We had some trouble integrating the map of gyms near the user, which we solved by approaching mentors on the LA Hacks channel and reading the documentation along with similar questions on Stack Overflow. We also had trouble using Google Firebase to add a Register/Sign-in component to our website, but we solved this by using Authlib.
Accomplishments that we're proud of
We are proud that we put together an elaborate project with unique components in such a short amount of time. We were also delighted to document the entire process thoroughly via Devpost and a GitHub readme, something we often overlook when making hackathon projects. Most of all, we are proud to have developed a project that not only benefits our community's physical well-being but also **helps reduce our global footprint**: home gyms take away some of the electricity and energy needed to power commercial gyms.
What we learned
This was one of the most productive hackathons for every member of the team, and everything we learned contributed to building this project. We learned how to design an HTML/CSS/JS-powered web application around a unified color palette. We also learned how to pass form data through a Flask function into a database that feeds each user's profile. Additionally, we learned how to turn the data collected from our motion sensor into a clean graphical format.
What's next for GymShare
Utilizing GymShare's platform and fitness tracker, we plan to add machine learning to analyze workouts. We hope to use the user's data to provide more fitness information, such as a meal/diet plan that could keep the user healthier. We also hope to add more data-collection features to the hardware component, such as a heart rate monitor, specific workout trackers, and a step counter, to make our data more accurate. All in all, these additions would further our unique and intelligent approach to fitness, with a strong focus on accessibility and ease of use.
Project Proboscis
Inspiration
Every year, roughly one million people die from malaria, about 90% of them in sub-Saharan Africa. Many of these regions have too few doctors to screen patients for malaria. With Project Proboscis, we want to make a change so these people get adequate help.
What it does
Proboscis uses TensorFlow to classify cell images as either uninfected or parasitized with malaria. The model is trained on 27,000 images.
How we built it
First, we preprocessed the image data to make it suitable for machine learning. Then we built the model with a convolutional neural network (CNN) and used it to make predictions.
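The preprocessing step might look something like the sketch below (the function and the label names are assumptions; typical CNN preprocessing scales pixel values to [0, 1] and encodes class labels as integers):

```python
import numpy as np

def preprocess(images, labels):
    """Prepare cell images for a CNN: scale uint8 pixels to [0, 1] floats
    and map string labels to integers (0 = uninfected, 1 = parasitized).

    Assumes the images have already been resized to a common shape.
    """
    x = np.asarray(images, dtype=np.float32) / 255.0
    label_map = {"uninfected": 0, "parasitized": 1}
    y = np.asarray([label_map[label] for label in labels], dtype=np.int64)
    return x, y
```

The resulting arrays could then be fed directly to a TensorFlow/Keras CNN's `fit` method.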
Challenges we ran into
We initially couldn't figure out how to use the convolutional neural network to predict on a given image, but we worked it out after researching online how predictions work.
Accomplishments that we're proud of
We combined server-side processing with ML code to make a fully functional classifier that can also be easily used by other people.
What we learned
We learned about convolutional neural networks and how to use a model to predict on a specific image. We also learned a lot about server-side processing and tools such as Heroku that make it much easier to build web apps like this.
What's next for Proboscis
Testing Proboscis AI in the field and seeing what difference it can make in malaria-affected countries.