Samihan Nandedkar
Full Stack Developer | Cloud Architecture | DevOps
I am a Full Stack Developer with a Master's degree in Computer Science from the University of Illinois at Chicago (completed May 2023). With over three years of professional experience, I specialize in building scalable, secure, and high-performance enterprise applications.
Currently, I work at UIC's Technology Solutions, where I develop and maintain critical systems serving 7,000+ daily active users. My expertise spans the entire software development lifecycle, from architecting distributed cloud systems on AWS (ECS Fargate, ALB, RDS) to implementing secure CI/CD pipelines and building comprehensive test automation suites with 1,000+ automated tests.
I'm passionate about backend development, cloud architecture, and DevOps practices. My technical toolkit includes JavaScript/TypeScript, Python, Rust, React.js, Vue.js, Node.js, and extensive experience with AWS and Azure cloud platforms. I've successfully integrated enterprise authentication systems (Azure AD, SSO) securing access for 10,000+ users, built automated data integration pipelines for Salesforce Marketing Cloud, and contributed to standardized design systems that accelerate application development.
I focus on delivering robust, well-tested solutions that prioritize security, scalability, and user experience. Whether it's optimizing performance for peak loads, implementing zero-downtime deployments, or designing intuitive APIs, I strive to create software that makes a real impact.
University of Illinois at Chicago - Technology Solutions
Jun. 2023 - Present
Full Stack Developer
University of Illinois at Chicago - Technology Solutions
Feb. 2022 - May 2023
Graduate Student Developer
Gap Inc. - Infrastructure Department (Compute Unix)
Jun. 2020 - Jul. 2021
Software Engineer
Gap Inc. - Infrastructure Department
May 2019 - Jul. 2019
Software Engineering Intern
University of Illinois at Chicago
Aug. 2021 - May 2023
Master of Science in Computer Science
Essential Coursework: Computer Algorithms, Engineering Distributed Objects in Cloud Computing, Object-Oriented Languages & Environments, Parallel Programming, Data Mining & Text Mining, Natural Language Processing, Software Engineering, Database Systems
Malaviya National Institute of Technology, Jaipur
Aug. 2016 - May 2020
Bachelor of Technology in Electronics and Communication Engineering
Languages: JavaScript (ES6+), TypeScript, Rust, Python, HTML5, CSS
Frontend Frameworks: React.js, Vue.js, Pinia, Bootstrap, Materialize
Backend & APIs: Node.js, Express.js, RESTful APIs
Testing & Quality: Jest, SuperTest, Unit Testing, Integration Testing
Cloud Platforms - AWS: EC2, ECS with Fargate, S3, RDS, ALB, CloudFront, IAM, Lambda, API Gateway
Cloud Platforms - Azure: API Management, Entra ID (Azure AD), Key Vault, AKS (Azure Kubernetes Service)
Databases: SQL, MySQL, Oracle DB, IBM DB2, MongoDB, DynamoDB (AWS), CosmosDB (Azure)
DevOps & CI/CD: Docker, Kubernetes, GitHub Actions, Jenkins, Git, Ansible
Authentication & Security: LDAP, OAuth 2.0, JWT, SAML, Azure Entra ID, Single Sign-On (SSO)
Big Data & Streaming: Kafka, Spark, Hadoop, MapReduce, Scala
Tools & Methodologies: Salesforce Marketing Cloud (Data APIs), Data Integration & ETL, Agile/Scrum
Operating Systems: Linux (RHEL, Oracle Enterprise Linux), IBM AIX, Windows; Linux System Administration
Architected an end-to-end real-time data streaming pipeline to analyze application logs from multiple distributed servers and automatically alert stakeholders when critical log entries are detected. This system provides immediate visibility into production issues and enables rapid incident response.
Technical Implementation: Logs generated across multiple EC2 instances are automatically pushed to S3, triggering AWS Lambda functions that interact with an Akka Actor System for orchestration. The system fetches newly generated log files and transmits them to Spark for analysis through a Kafka messaging queue, ensuring fault-tolerant, scalable processing. The pipeline handles high-volume log ingestion with minimal latency and includes automated alerting for anomaly detection.
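For illustration, a minimal Scala sketch of the Spark consumer side of such a pipeline. The broker address, topic name, and alert filter are placeholders rather than details from the repository, and it assumes the spark-sql-kafka connector is on the classpath; the console sink stands in for the real notification step.

import org.apache.spark.sql.SparkSession

object LogAlertStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("log-alert-stream").getOrCreate()
    import spark.implicits._

    // Consume raw log lines from the Kafka topic fed by the Lambda/Akka ingester.
    // Broker address and topic name are placeholders.
    val logs = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "app-logs")
      .load()
      .selectExpr("CAST(value AS STRING) AS line")

    // Keep only critical entries; a real deployment would match the actual log format.
    val critical = logs.filter($"line".contains("ERROR") || $"line".contains("FATAL"))

    // Console sink stands in for the alerting/notification step.
    critical.writeStream
      .outputMode("append")
      .format("console")
      .start()
      .awaitTermination()
  }
}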
Used stack: Akka, Apache Kafka, Apache Spark, AWS (EC2, S3, Lambda)
GitHub Repository
Developed a comprehensive cloud infrastructure management platform that enables teams to monitor, manage, and interact with resources across multiple cloud providers from a unified interface. This solution significantly improves operational efficiency by eliminating the need to switch between different cloud provider consoles.
Key Features: Real-time resource monitoring, cost tracking, automated deployments via Ansible integration, role-based access control, and cross-cloud resource comparison. The platform provides a centralized view of infrastructure footprint across AWS, Azure, and Oracle Cloud, enabling better decision-making and resource optimization. Includes RESTful APIs for programmatic access and integrates with existing CI/CD pipelines.
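The description above does not specify the platform's own implementation stack, so purely as an illustration of the programmatic resource-inventory idea (and to keep one language across these examples), here is a minimal Scala sketch that lists EC2 instances with the AWS SDK for Java v2; a real cross-cloud view would merge similar queries against Azure and Oracle Cloud.

import software.amazon.awssdk.regions.Region
import software.amazon.awssdk.services.ec2.Ec2Client
import scala.jdk.CollectionConverters._

object AwsInventory {
  def main(args: Array[String]): Unit = {
    // Region is a placeholder; pagination and multi-account handling are omitted.
    val ec2 = Ec2Client.builder().region(Region.US_EAST_1).build()

    val instances = ec2.describeInstances().reservations().asScala
      .flatMap(_.instances().asScala)

    // Print a one-line summary per instance, the kind of row a unified dashboard would show.
    instances.foreach { i =>
      println(s"${i.instanceId()}  ${i.instanceTypeAsString()}  ${i.state().nameAsString()}")
    }

    ec2.close()
  }
}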
Used stack: AWS, Azure, Oracle Cloud, Ansible, RESTful APIs
GitHub Repository | Demo Application
Designed and implemented a high-performance client-server architecture for invoking AWS Lambda functions in a distributed computing environment using both gRPC and REST protocols. This project demonstrates the performance trade-offs between different communication protocols and showcases best practices for building scalable serverless architectures.
Technical Highlights: The system leverages gRPC for low-latency, high-throughput communication between distributed services, while also providing REST endpoints through AWS API Gateway for broader client compatibility. Uses ScalaPB for Protocol Buffer serialization, implements efficient connection pooling, and includes comprehensive error handling and retry mechanisms. The Lambda functions are orchestrated to perform parallel computations with results aggregated from S3 storage.
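For illustration, a minimal Scala sketch of the REST-style invocation path using the AWS SDK for Java v2; the function name and payload are placeholders, and the gRPC path with ScalaPB-generated stubs is not shown.

import software.amazon.awssdk.core.SdkBytes
import software.amazon.awssdk.regions.Region
import software.amazon.awssdk.services.lambda.LambdaClient
import software.amazon.awssdk.services.lambda.model.InvokeRequest

object LambdaInvoker {
  def main(args: Array[String]): Unit = {
    // Synchronous Lambda client; region and function name are placeholders.
    val client = LambdaClient.builder().region(Region.US_EAST_1).build()

    val request = InvokeRequest.builder()
      .functionName("compute-worker")
      .payload(SdkBytes.fromUtf8String("""{"input": 42}"""))
      .build()

    // Invoke the function and read back its JSON response payload.
    val response = client.invoke(request)
    println(s"Status: ${response.statusCode()}, Payload: ${response.payload().asUtf8String()}")

    client.close()
  }
}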
Used stack: Scala, ScalaPB, gRPC, AWS Lambda, API Gateway, S3
GitHub Repository
Developed a distributed data processing solution using the MapReduce programming model to analyze massive datasets efficiently. This project demonstrates expertise in big data technologies and the ability to design scalable algorithms for parallel processing of terabyte-scale data across multiple compute nodes.
Implementation Details: Built custom MapReduce jobs in Scala to perform complex data transformations, aggregations, and analytics on large datasets. Deployed on AWS EMR (Elastic MapReduce) for cloud-based processing and tested locally using Hortonworks Sandbox. The solution optimizes data locality, implements custom partitioners for load balancing, and includes performance tuning for memory management and data serialization. Handles fault tolerance through Hadoop's built-in mechanisms.
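For illustration, a minimal word-count-style Hadoop MapReduce job written in Scala; the repository's actual jobs perform more involved transformations, so this only sketches the mapper/reducer/driver structure.

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.io.{IntWritable, LongWritable, Text}
import org.apache.hadoop.mapreduce.{Job, Mapper, Reducer}
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
import scala.jdk.CollectionConverters._

// Emits (token, 1) for every whitespace-separated token in the input split.
class TokenMapper extends Mapper[LongWritable, Text, Text, IntWritable] {
  private val one = new IntWritable(1)
  private val word = new Text()

  override def map(key: LongWritable, value: Text,
                   context: Mapper[LongWritable, Text, Text, IntWritable]#Context): Unit =
    value.toString.split("\\s+").filter(_.nonEmpty).foreach { token =>
      word.set(token)
      context.write(word, one)
    }
}

// Sums the counts produced for each token across all mappers.
class SumReducer extends Reducer[Text, IntWritable, Text, IntWritable] {
  override def reduce(key: Text, values: java.lang.Iterable[IntWritable],
                      context: Reducer[Text, IntWritable, Text, IntWritable]#Context): Unit =
    context.write(key, new IntWritable(values.asScala.map(_.get).sum))
}

// Driver: wires the job together and points it at input/output paths from the CLI.
object WordCountJob {
  def main(args: Array[String]): Unit = {
    val job = Job.getInstance(new Configuration(), "word-count")
    job.setJarByClass(getClass)
    job.setMapperClass(classOf[TokenMapper])
    job.setReducerClass(classOf[SumReducer])
    job.setOutputKeyClass(classOf[Text])
    job.setOutputValueClass(classOf[IntWritable])
    FileInputFormat.addInputPath(job, new Path(args(0)))
    FileOutputFormat.setOutputPath(job, new Path(args(1)))
    System.exit(if (job.waitForCompletion(true)) 0 else 1)
  }
}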
Used stack: Scala, Hadoop MapReduce, AWS EMR, Hortonworks Sandbox
GitHub Repository
Built a feature-rich social media application using the MERN stack (MongoDB, Express, React, Node.js) that enables users to create profiles, share posts, interact through comments and likes, and follow other users. The platform demonstrates proficiency in building modern, responsive web applications with real-time interactions and secure user authentication.
Features & Architecture: Implements JWT-based authentication, real-time post updates, image upload functionality, user profile management, and a responsive feed algorithm. The React frontend utilizes modern hooks and context API for state management, while the Node.js/Express backend provides RESTful APIs with MongoDB for efficient data persistence. Includes input validation, error handling, and optimized database queries for improved performance.
Used stack: MongoDB, Express.js, React, Node.js, JWT
GitHub Repository | Demo Application
Designed and implemented a complete domain-specific language (DSL) in Scala that enables users to perform complex set theory operations with an intuitive, expressive syntax. This project showcases advanced programming language design principles, including lexical analysis, parsing, semantic analysis, and interpretation.
Language Features: Supports defining sets, variables with scoping rules, nested scopes, and evaluating binary operations (union, intersection, difference, symmetric difference, Cartesian product). Implements type checking, variable binding, scope resolution, and comprehensive error handling with detailed error messages. The language includes an interpreter that evaluates expressions and provides immediate feedback, making it an educational tool for learning set theory concepts. Demonstrates functional programming paradigms and compiler construction techniques.
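For illustration, a minimal sketch of how such an expression tree and evaluator can be modeled in Scala; the type names, supported operations, and error handling here are simplified stand-ins for the project's actual grammar and interpreter.

// Expression tree for a small set-algebra language; constructor names are illustrative.
sealed trait SetExpr
case class Literal(elems: Set[Int])             extends SetExpr
case class Variable(name: String)               extends SetExpr
case class Union(l: SetExpr, r: SetExpr)        extends SetExpr
case class Intersection(l: SetExpr, r: SetExpr) extends SetExpr
case class Difference(l: SetExpr, r: SetExpr)   extends SetExpr

object Interpreter {
  type Env = Map[String, Set[Int]]

  // Recursively evaluate an expression against a variable environment.
  def eval(expr: SetExpr, env: Env): Set[Int] = expr match {
    case Literal(elems)     => elems
    case Variable(name)     => env.getOrElse(name, sys.error(s"unbound variable: $name"))
    case Union(l, r)        => eval(l, env) union eval(r, env)
    case Intersection(l, r) => eval(l, env) intersect eval(r, env)
    case Difference(l, r)   => eval(l, env) diff eval(r, env)
  }

  def main(args: Array[String]): Unit = {
    val env: Env = Map("A" -> Set(1, 2, 3), "B" -> Set(2, 3, 4))
    // (A union {5}) intersect B  evaluates to Set(2, 3)
    val program = Intersection(Union(Variable("A"), Literal(Set(5))), Variable("B"))
    println(eval(program, env))
  }
}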
Used stack: Scala
GitHub Repository
Developed comprehensive simulations of cloud computing environments using CloudSim Plus framework to model and analyze various cloud architectures, resource allocation strategies, and scheduling algorithms. This project provides insights into cloud infrastructure design decisions and their impact on performance, cost, and resource utilization.
Simulation Scenarios: Implemented multiple datacenter configurations with varying VM allocation policies, task scheduling algorithms (First Come First Serve, Round Robin, Time-Shared), and network topologies. Analyzed performance metrics including execution time, resource utilization, energy consumption, and cost optimization. The simulations help in understanding trade-offs between different cloud service models (IaaS, PaaS, SaaS) and in making informed decisions about infrastructure provisioning. Includes detailed performance reports and visualization of results for comparative analysis.
Used stack: CloudSim Plus (Java)
GitHub Repository
Feel free to reach out and connect with me on these platforms.