How would you like to help implement innovative cloud solutions and solve the most complex technical problems for customers who are looking to migrate to the cloud? Does the idea of being a critical part of building and running cloud infrastructure for strategic customers excite you?
Welcome to eCloudvalley, a go-to Premier Consulting Partner of Amazon Web Services (AWS), where you can have fun while making history. Join us as a Data Engineer to help build the future for customers the right way!
Be part of a dedicated and culture-driven team where your ideas have the potential to affect companies positively by reshaping traditional mindsets and improving customers' daily operations. Be a catalyst to define, build, and run AWS services in high-growth environments, and solve unique problems at massive scale across multiple AWS services.
As a Data Engineer, you will collaborate with some of the brightest technical minds in the industry today across an entire ecosystem of services and solutions on AWS. You will also join a team that invests in your success through comprehensive learning programs and an open environment that includes self-paced learning, hands-on projects, and project-shadowing opportunities to develop your knowledge and capabilities as a Data Engineer.
KEY JOB RESPONSIBILITIES
- Understand customer technical requirements and implement AWS solutions that address scalability, reliability, security, and performance.
- Collaborate with Solutions Architects, Project Managers, and the Cloud Engineering team on project implementations.
- Build strong technical relationships with customers and provide support during project implementations.
- Work closely with the customer's technical team and project teams to expedite project timelines.
- Maintain and enhance technical skills and knowledge while sharing expertise with internal teams and the technical community.
- Document best practices and contribute to the company Wiki and knowledge base.
- Support local and remote customers in assigned projects and managed services engagements.
- Lead technical workshops and knowledge-sharing sessions to disseminate expertise across the support team and organization.
- Enable effective decision-making by retrieving and aggregating data from multiple sources and compiling it into a digestible and actionable format.
- Measure business metrics and visualize them in Business Intelligence dashboards.
- Perform data discovery and high-performance big data analytics; define transformation logic and data models (a minimal pipeline sketch follows this list).
- Understand end-to-end data design and management, covering data acquisition and processing, data quality, and governance for the enterprise data platform.
- Collect and enter data as needed.
- Apply data mining and statistical techniques to develop insights and transformational outcomes; visualize data and insights.
- Perform a key technical implementation role across advanced big data techniques, including data modeling, data access, data integration, data visualization, data mining, data discovery, statistical methods, and database design and implementation.
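To illustrate the pipeline work described above, here is a minimal PySpark sketch of an extract-transform-load step. The S3 paths, column names, and transformation logic are hypothetical placeholders chosen for illustration, not a prescribed eCloudvalley or customer implementation.

```python
# Minimal ETL sketch, assuming hypothetical S3 paths and columns; a real
# job would add schema enforcement, error handling, and quality checks.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-transform").getOrCreate()

# Extract: raw order events landed as CSV in a (hypothetical) raw zone.
orders = spark.read.csv("s3://example-raw-zone/orders/",
                        header=True, inferSchema=True)

# Transform: normalize types, deduplicate, and derive a partition column.
cleaned = (
    orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
    .dropDuplicates(["order_id"])
)

# Load: write Parquet to the curated zone, partitioned by date.
cleaned.write.mode("overwrite").partitionBy("order_date") \
    .parquet("s3://example-curated-zone/orders/")
```

Partitioning the curated output by date is a common design choice because it keeps downstream Athena or Redshift Spectrum scans narrow and cheap.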
QUALIFICATIONS
- Bachelor's degree in Computer Science, Business Administration, Information Systems, Statistics, Mathematics, or a related field, or equivalent experience in cloud computing.
REQUIRED SKILLS
- Previous work experience with Agile and Scrum practices
- Excellent analytical and problem-solving skills
- Passion for AWS technology and a demonstrated ability to learn quickly. AWS certifications such as AWS SysOps Administrator – Associate, AWS Developer – Associate, AWS Machine Learning Engineer – Associate / Specialty, AWS Data Engineer – Associate, or AWS Solutions Architect – Associate are a plus.
- 2-4+ years of experience designing, implementing, and supporting IT public cloud solutions.
- 2+ years of hands-on experience in architecting, designing, and implementing data pipelines on the AWS cloud.
- 3-5 years of experience in Data Analysis / Data Architecture / Data Warehousing (DWH) / Data Lakes in a business setting, preferably in a client-facing, consulting-oriented role
- Demonstrated analytical ability and a results-oriented mindset, with experience in external customer interaction.
- Strong data analytics experience with proficiency in SQL / Python / R / Spark (a short analytics sketch follows this list)
- Ability to understand end-to-end data architecture including Data Extraction, Data Transformation, and Data Modelling
- Experience in enterprise data platforms and solutions incorporating Big Data, AI Systems, and Cloud (AWS is a plus).
- Proficiency with modern cloud data warehouses (e.g. Snowflake, Redshift)
- Strong sense of ownership, positive attitude, and commitment to delivering high-quality work.
- Proficiency in English with the ability to communicate technical topics clearly and effectively.
- Consulting experience delivering on customer requirements.
- Experience with open source or enterprise databases.
- Understanding of monitoring and troubleshooting in Data Pipeline infrastructure environments.
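As a flavor of the SQL/Python analytics proficiency listed above, here is a small pandas sketch that aggregates business metrics. The input file, columns, and segment field are hypothetical; in practice, the same aggregation would often run as SQL against a warehouse such as Redshift or Snowflake.

```python
# Metric-aggregation sketch on a hypothetical orders extract.
import pandas as pd

orders = pd.read_csv("orders_extract.csv", parse_dates=["order_ts"])

# Monthly revenue and order counts per (hypothetical) customer segment.
monthly = (
    orders
    .assign(month=orders["order_ts"].dt.to_period("M"))
    .groupby(["month", "segment"], as_index=False)
    .agg(revenue=("amount", "sum"), orders=("order_id", "count"))
    .sort_values(["segment", "month"])
)

# Month-over-month revenue growth within each segment.
monthly["revenue_mom_pct"] = monthly.groupby("segment")["revenue"].pct_change()

print(monthly.head())
```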
PREFERRED QUALIFICATIONS
- Graduate or Postgraduate degree in Information Science / Information Technology, Data Science, Computer Science, Engineering, Mathematics, Statistics, Physics, OR equivalent industry experience
- Experience with E-Commerce, Retail, and Business Analytics would be an advantage.
- Understanding of data warehousing, data modeling concepts, and building new DW tables
- Advanced SQL skills; fluency in R and/or Python; advanced Microsoft Office skills, particularly Excel; and experience with analytical platforms
- Experience developing analytics projects, such as Data Warehouse, Data Modelling, Business intelligence, and Microservices.
- Excellent problem-solving skills and the ability to work through ambiguity.
- Effective communication skills to lead technical discussions and engage with customers.
- Experience with Apache Hadoop, Spark, Hive, Presto, and distributed computing
- ETL, particularly with AWS Glue or Apache Airflow, and SQL databases (a minimal orchestration sketch follows this list)
- AWS Analytics Services such as Glue, Athena, SageMaker, etc.
- Use machine learning to build recommendation engines and classification, regression, and clustering models, and evaluate model performance (a small evaluation sketch follows this list)
- Experience with automated deployment and version control (Git)
- Understanding of MLOps
- Understanding of machine learning systems and frameworks, e.g., TensorFlow, MXNet, and SageMaker
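To illustrate the ETL orchestration item above, here is a minimal Apache Airflow 2.x sketch of the extract-transform-load pattern. The DAG id and task bodies are hypothetical placeholders; in production, the tasks would typically trigger AWS Glue jobs or Spark steps rather than print statements.

```python
# Minimal Airflow 2.x DAG sketch with hypothetical placeholder tasks.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw files from the source system")


def transform():
    print("clean and conform the extracted data")


def load():
    print("publish curated tables to the warehouse")


with DAG(
    dag_id="orders_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```

Chaining the tasks with `>>` declares the dependency order, so a failed extract stops the downstream transform and load from running.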
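And for the machine learning item above, a small scikit-learn sketch of clustering with performance evaluation; the synthetic data and parameter choices are illustrative only, standing in for real customer features.

```python
# Clustering-and-evaluation sketch on synthetic (hypothetical) data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic stand-in for real customer feature vectors.
X, _ = make_blobs(n_samples=500, centers=4, random_state=42)

model = KMeans(n_clusters=4, n_init=10, random_state=42).fit(X)

# Silhouette ranges from -1 (poor) to 1 (well-separated clusters).
print("silhouette:", silhouette_score(X, model.labels_))
```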