We are seeking a Data Scientist with a strong background in Python, statistics, operations research, and AI/ML tools and frameworks. The ideal candidate will have 6 to 8 years of experience in relevant fields.
Required Skills:
Python Programming: Strong proficiency in Python.
Statistics and Operations Research: Practical knowledge and working experience.
Tools and Frameworks: Experience with Flask, PySpark, PyTorch, TensorFlow, Keras, Databricks, OpenCV, Pillow/PIL, Streamlit, D3.js, Plotly Dash, and Neo4j.
AWS Services: Hands-on experience with AWS AI/ML services such as SageMaker, SageMaker Canvas, and Bedrock.
Machine Learning Techniques: Understanding of predictive and ML techniques such as regression models, XGBoost, random forests, gradient boosting machines (GBM), neural networks, and support vector machines (SVMs).
NLP Techniques: Proficiency with RNNs, LSTMs, and attention-based models; experience with NLP offerings from Stanford, IBM, Azure, and OpenAI.
SQL: Good understanding of SQL for efficient data querying.
Version Control: Hands-on experience with version control tools like GitHub or Bitbucket.
MLOps: Experience deploying ML models into production on platforms like Azure and AWS.
Business Analysis: Ability to understand business needs and map them to business processes.
Agile Methodology: Experience with agile project delivery.
Visualization: Ability to conceptualize and visualize end-to-end business needs, both at a high level and in detail.
Communication: Good communication, listening, and probing skills.
Analytical Skills: Strong analytical and problem-solving skills.
Interpersonal Skills: Strong interpersonal skills and the ability to collaborate effectively with team members.
Key Responsibilities:
Understand business issues and address them with valuable solutions.
Design statistical, AI, and deep learning models to solve business problems.
Develop and deploy statistical, ML, and DL models into production.
Identify and augment available data sources.
Create innovative data visualizations using tools such as D3.js, Plotly Dash, and Neo4j.