👐 I’m actively seeking an end-of-study internship in AI, Generative AI, Machine Learning, or MLOps, where I can apply my expertise in model deployment, pipeline automation, and real-time AI solutions.
🎯 I’m currently working on building an advanced MLOps pipeline to fine-tune and serve large-scale models like Llama3.1 with optimized efficiency.
🤝 I’m eager to collaborate on projects that involve deploying deep learning models in production environments and developing end-to-end generative AI solutions. I’m diving deep into LoRA fine-tuning techniques (a minimal sketch is below) while also exploring Snowflake frameworks for seamless model fine-tuning and deployment.
💬 Feel free to ask me about implementing CI/CD pipelines for containerized web applications using GitHub Actions and Azure Kubernetes Service.
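To give a flavour of the LoRA work, here is a minimal sketch of attaching a LoRA adapter to a causal LM with Hugging Face PEFT. The model id, rank, and target modules are illustrative assumptions, not the exact configuration of my pipeline.

```python
# Minimal LoRA sketch with Hugging Face PEFT (illustrative values, not a tested recipe).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model = "meta-llama/Llama-3.1-8B"  # assumed model id; gated, requires access approval
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                                 # low-rank dimension (illustrative)
    lora_alpha=32,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only the small LoRA matrices are trainable
```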
Pinned
- Deep-Learning-Projects (Public): Perceptron, Multi-Layer Perceptron, CNN, RNN, LSTM, GRU, Transformer
- Advanced-Rag-system- (Public · Python · 2): End-to-end advanced RAG project using open-source LLMs and Groq inferencing (Groq sketch after this list)
- cold-email-generator- (Public · Python · 3): End-to-end generative AI project using open-source LLMs and Groq inferencing
- Chat-messages-Web-Analysis (Public · Python · 2): A full-stack web application for data analysis on a Hugging Face dataset
- Function-Calling-using-llama3.1 (Public · Python · 2): Testing function calling while running llama3.1 locally with Ollama (sketch after this list)
- Backend-Devhub-Project (Public · TypeScript · 2): Backend part: secure authentication and management of projects, consultants, and clients, using guards, interceptors, and middlewares. Built with NestJS, TypeORM, and JWT, with CRUD operations.
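A hedged sketch of the generation step behind Advanced-Rag-system-: retrieved context is passed to an open-source model served through Groq. The model id, environment variable, and placeholder context are assumptions for illustration; the repo's retrieval code is not reproduced here.

```python
# Sketch of the Groq inference call used after retrieval in a RAG flow.
# GROQ_API_KEY, the model id, and the placeholder context are illustrative assumptions.
import os
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

context = "retrieved chunks from the vector store go here"
question = "What does the document say about deployment?"

completion = client.chat.completions.create(
    model="llama-3.1-8b-instant",  # assumed Groq-hosted open-source model
    messages=[
        {"role": "system", "content": f"Answer using only this context:\n{context}"},
        {"role": "user", "content": question},
    ],
)
print(completion.choices[0].message.content)
```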
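And a sketch of the idea behind Function-Calling-using-llama3.1: sending a tool definition to a local Ollama server and reading back the model's tool call. The get_weather tool is hypothetical, Ollama is assumed to be running on its default port with llama3.1 pulled, and the response fields follow the documented /api/chat schema, which can vary between Ollama versions.

```python
# Sketch of tool/function calling against a local Ollama server running llama3.1.
# The get_weather tool is a made-up example; the default port 11434 is assumed.
import requests

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",                      # hypothetical tool
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.1",
        "messages": [{"role": "user", "content": "What is the weather in Tunis?"}],
        "tools": tools,
        "stream": False,
    },
    timeout=120,
)
message = resp.json()["message"]

# If the model decided to call a tool, its name and arguments come back
# instead of (or alongside) plain text content.
for call in message.get("tool_calls", []):
    print(call["function"]["name"], call["function"]["arguments"])
```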