Hello, I'm Dhruv Patel
Who am I?
I'm a graduate student pursuing my Master's in Computer Science at the Viterbi School of Engineering, University of Southern California. I earned my bachelor's degree from Dharmsinh Desai Institute of Technology in Gujarat, India. I have worked as a Research Intern at the Indian Institute of Technology, Bombay in the field of Natural Language Processing under Prof. Pushpak Bhattacharyya. My research work includes simplification of complex sentences, morphological analysis, semantic analysis, and word embeddings.

My passion is to work with extremely talented individuals and collaborate with them to create unique open-source products that have an immediate impact. I enjoy pursuing real-life problems and projects using cutting-edge technology. Currently, I am working with a group of researchers at the Information Sciences Institute to study the effects humans have on natural resources. Using these findings, we have designed a semantic workflow that would help mitigate these effects using Artificial Intelligence.

Moreover, I have solid experience with Angular 5, Spring MVC, Django, and Node.js; cloud technologies like Docker, AWS, GCP, and Kubernetes; and big data tools like Apache Spark, Apache Storm, and Apache Kafka. I believe the blend of research experience and software development knowledge is quite rare these days, and having both makes me a strong addition to any team.
Projects
Arancia - Distributed Key-Value Store
Implemented a Proxy Server in Go to serve authenticated requests to KV Store from multiple clients via one ingress and egress port
Ensured data atomicity to maintain consistency and incorporated a write-back set-associative cache to reduce read latency
Scaled out Celery with RabbitMQ as the message broker for scheduling regular snapshots of KV Store
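As an illustration of the write-back, set-associative caching idea above, here is a minimal Python sketch; the class name, the per-set LRU policy, and the dict-backed store are my own assumptions for exposition, not the Arancia implementation:

```python
from collections import OrderedDict

class WriteBackCache:
    """Minimal write-back set-associative cache with per-set LRU eviction."""

    def __init__(self, backing_store, num_sets=4, ways=2):
        self.backing = backing_store          # dict standing in for the KV store
        self.num_sets = num_sets
        self.ways = ways
        # each set maps key -> (value, dirty_flag), ordered by recency
        self.sets = [OrderedDict() for _ in range(num_sets)]

    def _set_for(self, key):
        return self.sets[hash(key) % self.num_sets]

    def get(self, key):
        s = self._set_for(key)
        if key in s:
            s.move_to_end(key)                # hit: mark most recently used
            return s[key][0]
        value = self.backing[key]             # miss: read through to the store
        self._insert(s, key, value, dirty=False)
        return value

    def put(self, key, value):
        s = self._set_for(key)
        if key in s:
            s.move_to_end(key)
        self._insert(s, key, value, dirty=True)   # defer the backing write

    def _insert(self, s, key, value, dirty):
        s[key] = (value, dirty)
        if len(s) > self.ways:
            old_key, (old_val, old_dirty) = s.popitem(last=False)
            if old_dirty:                     # write back only on eviction
                self.backing[old_key] = old_val

    def flush(self):
        """Write all dirty entries back, e.g. before taking a snapshot."""
        for key, (value, dirty) in self.sets[0].items() if False else []:
            pass
        for s in self.sets:
            for key, (value, dirty) in s.items():
                if dirty:
                    self.backing[key] = value
                s[key] = (value, False)
```

The write-back policy is what reduces write amplification here: `put` touches only the cache, and the backing store sees a write only on eviction or an explicit `flush`.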
Twitter Stream Analysis
Built a Storm topology to generate a list of popular words used on Twitter. Ingested data from a Storm spout and a Kafka spout and processed it downstream using Storm bolts. Developed a word cloud for analysis.
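The word-counting core of such a topology can be sketched in plain Python; the bolt names and the stopword list are illustrative stand-ins, since a real Storm topology would implement these as bolts wired together by the topology builder:

```python
import re
from collections import Counter

# illustrative stopword list; a real deployment would use a fuller one
STOPWORDS = {"the", "a", "an", "is", "to", "and", "of", "in", "rt"}

def split_bolt(tweet):
    """Mimics a splitter bolt: emit lowercase word tokens from one tweet."""
    return [w for w in re.findall(r"[a-z']+", tweet.lower())
            if w not in STOPWORDS]

class CountBolt:
    """Mimics a counting bolt: accumulate word frequencies across tuples."""
    def __init__(self):
        self.counts = Counter()

    def execute(self, words):
        self.counts.update(words)

    def top(self, n):
        """Most popular words so far, e.g. to feed a word cloud."""
        return self.counts.most_common(n)
```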
Recommender System for Movie Ratings
Built a robust recommendation system using user-item based Collaborative filtering.
Used Scala and Apache Spark to handle 30M ratings from the MovieLens dataset, achieving an RMSE as low as 0.91.
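A minimal sketch of the user-based collaborative-filtering idea: cosine similarity over co-rated items, similarity-weighted prediction, and RMSE over held-out ratings. The actual project used Scala and Spark at scale, so the function names and toy data layout here are purely illustrative:

```python
import math

def cosine(u, v):
    """Cosine similarity over the items two users have both rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(u[i] ** 2 for i in common)) *
           math.sqrt(sum(v[i] ** 2 for i in common)))
    return num / den if den else 0.0

def predict(ratings, user, item):
    """User-based CF: similarity-weighted average of neighbours' ratings."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        sim = cosine(ratings[user], r)
        num += sim * r[item]
        den += abs(sim)
    return num / den if den else None

def rmse(ratings, heldout):
    """heldout: list of (user, item, true_rating) triples."""
    errs = [(predict(ratings, u, i) - r) ** 2
            for u, i, r in heldout
            if predict(ratings, u, i) is not None]
    return math.sqrt(sum(errs) / len(errs))
```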
Eduauto
Created multiple Web Services with OAuth 2.0 authentication for inserting, retrieving and deleting contents of the database.
Developed the client-side with features such as news feed displaying, record insertion UI, accountancy dashboard and online test simulator.
Smart Paraphraser
Built a feature-based model to obtain various characteristics of words such as syllable count, etymology, morphemes and n-grams for word complexity detection.
Computed the score between a specific word and its synonyms to detect word usage patterns in English grammar.
P2P File Transfer
Designed a real-time browser-to-browser communication system for transferring files from one device to multiple devices without uploading files to the server.
Engineered asynchronous merging of blobs of the same file, using acknowledgement generation to handle out-of-sequence data delivery.
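The out-of-sequence reassembly idea can be sketched as follows; the class name and the ack/missing response format are hypothetical, not the project's actual browser-to-browser protocol:

```python
class FileAssembler:
    """Reassembles file chunks that may arrive out of sequence.

    Each chunk carries a sequence number. On receipt, an acknowledgement
    is returned along with the set of still-missing chunks, so the sender
    can retransmit (hypothetical protocol for illustration).
    """
    def __init__(self, total_chunks):
        self.total = total_chunks
        self.chunks = {}                      # seq -> bytes

    def receive(self, seq, data):
        self.chunks[seq] = data
        missing = [i for i in range(self.total) if i not in self.chunks]
        return {"ack": seq, "missing": missing}

    def assemble(self):
        """Merge chunks in sequence order once every chunk has arrived."""
        missing = [i for i in range(self.total) if i not in self.chunks]
        if missing:
            raise ValueError(f"missing chunks: {missing}")
        return b"".join(self.chunks[i] for i in range(self.total))
```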
Restaurant Automation
Constructed an automated restaurant management system with various features such as table reservation, menu selection, bill payment, memberships, and payment gateway.
Minesweeper-The Game
Programmed the classic Minesweeper game and implemented a modified flood fill algorithm to reveal boxes that do not contain mines.
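A minimal sketch of that flood fill, under an assumed board encoding (cells hold their adjacent-mine counts, with -1 marking a mine); the modification relative to plain flood fill is that expansion continues only through zero-count cells, while bordering numbered cells are revealed but not expanded:

```python
def reveal(board, r, c, revealed=None):
    """Reveal cell (r, c); if it has zero adjacent mines, recursively
    reveal its eight neighbours. Returns the set of revealed cells."""
    if revealed is None:
        revealed = set()
    rows, cols = len(board), len(board[0])
    if (not (0 <= r < rows and 0 <= c < cols)
            or (r, c) in revealed
            or board[r][c] == -1):            # never reveal a mine
        return revealed
    revealed.add((r, c))
    if board[r][c] == 0:                      # expand only through empty cells
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr or dc:
                    reveal(board, r + dr, c + dc, revealed)
    return revealed
```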
Chatbot with Speech Recognition
Devised a working version of a simple Chatbot with Speech Recognition capabilities using Web Speech API, Twilio and Node Libraries.
Technical Skills
Go (Advanced), Java (Intermediate), Python (Advanced), JavaScript (Advanced), C++, Scala, HTML, CSS, PHP
Django, Flask, Laravel, Spring MVC, Node.js, Polymer, Angular 7, D3.js, REST, Boto3, Redis, RabbitMQ, Elasticsearch, Grafana, Celery, Airflow, Kafka, Storm, MySQL, PostgreSQL, Firebase, LevelDB, CocoaPods
AWS (Lambda, Step Functions, S3, IAM, DynamoDB, CloudWatch, API Gateway, SQS, SNS, VPC, EC2, GuardDuty, Inspector, Kinesis Firehose, EKS, ECS, EMR), GCP (BigQuery), Terraform, Jenkins, SaltStack, Vagrant, Docker, Prometheus, PagerDuty, Envoy
Linux, Weenix, bpf, netconsd
Work Experience
BlueJeans Network
Software Engineer Intern May '19 -- Aug '19
- Developed an end-to-end feature to assign Action Items, Decisions and Reviews to specific members present during a meeting, driving the project from conceptualization to testing and execution. (Java, Spring, MongoDB, JUnit, React) [Released on October 1, 2019]
- Worked closely with the Platform team to build a log aggregation system on AWS using Lambda (written in Python using Boto3), Kinesis Firehose, Elasticsearch, Grafana and S3
- Engineered a solution to reduce the size of the ES cluster by 50%, yielding significant cost savings on ES by requesting logs on-demand
- Assisted in migrating the L7 proxy layer of the BlueJeans App from HAProxy to Envoy to incorporate HTTP/2 and gRPC support (performed over a period of 3 months across various maintenance cycles)
- Developed a tool to reduce the overhead of deploying Lambda functions on AWS using Flask, Jinja, Terraform, Docker and Jenkins
Information Sciences Institute, Marina Del Rey
Graduate Research Assistant – MINT Project Oct '18 -- Apr '19
Designed and developed an ETL pipeline to extract data from RDF Triplestore by converting it into JSON and pushing the transformed data into Cassandra.
Built a Model Catalog Explorer to analyze large datasets used for forecasting environmental challenges using Polymer with Apollo Client (for using GraphQL) and D3.js. Used lazy loading, streams and asynchronous programming to process large data.
Contributed to the development of the SPARQL Query Manager, which is used to execute SPARQL queries on GRLC, and published the package on PyPI under the name "oba-sparql".
Indian Institute of Technology, Bombay
Research & Development Intern at the Center for Indian Language Technology (CFILT) Dec '17 -- Apr '18
Designed and developed APIs in Python using Django REST Framework for Question-Answering System, user-base management (with OAuth 2.0 API Security) and Wordnet Visualizer. Used React with Redux to populate the data in frontend.
Built a feature-based model using CoreNLP to obtain various characteristics of words such as syllable count, etymology, morphemes and n-grams for detecting word complexity. Achieved a baseline kappa score of 0.498 on the trial set and 0.204 on the test set.