
Rolls-Royce Recruitment 2019 | Data Engineer | BE/ B.Tech – CSE/ IT, ECE, E&C | Pune | February 2019

Company: Rolls-Royce India Pvt Ltd
Rolls-Royce is a world-leading provider of power systems and services, for use on land, at sea and in the air. We’re proud to have a strong presence and an 80-year heritage in India, and are excited about our growing future in Bangalore. Through innovative solutions and diverse, globally renowned products, we’ve already changed the aerospace industry in India.
Rolls-Royce Limited was a renowned British car manufacturer and, later, aero-engine manufacturer, founded by Charles Stewart Rolls and Sir Frederick Henry Royce on 15 March 1906 as the result of a partnership formed in 1904.
Powering more than 50% of flights to and from India, we are poised to become an engineering hub in the region and are committed to growing our footprint in high-end technology.
Company Website: www.rolls-royce.com
Position: Data Engineer
Experience: 3 Years
Salary: Best In Industry
Job Location: Pune
Qualifications & Experience:
  • Bachelor's/ B.Tech/ BE with a relevant specialization and a minimum of 3 years of professional experience as a Data Engineer and/or Software Developer
  • Advanced working knowledge of SQL and experience working with relational databases
  • Extensive experience with data processing and ETL to design complete data pipeline workflows for different projects and use cases (a minimal sketch follows this list)
  • Knowledge of technologies from at least 3 of the following areas: NoSQL (e.g. Elasticsearch, MongoDB, Cassandra), Message Queues (e.g. Kafka, RabbitMQ) and Big Data (e.g. Hadoop MapReduce, Spark, HBase)
  • Knowledge and experience with modern software development solutions based on micro services and application containerization (e.g. Docker)
  • Experience with CI/CD tools (e.g. Jenkins, GitLab CI, Travis CI, Atlassian Stack, Git, …), especially in the area of ETL automation
  • Good Linux knowledge and scripting skills (at least Python and Bash)
  • Experience with at least one of the major cloud providers AWS, Azure or Google Cloud and with automated deployments in the cloud
  • Experience with Openstack and Kubernetes is a plus
  • Knowledge of technologies in the fields of data analytics, visual analytics, machine learning, search and text mining is a plus
  • Result-oriented, rigorous and independent way of working
  • Fluent in English; fluency in German is a plus
  • Strong critical thinking, an analytical mindset and problem-solving skills
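For context, the pipeline-design work referenced above typically looks something like the minimal Python sketch below. It is purely illustrative: the databases, table names, columns and cleaning rule are assumptions invented for this example, not details taken from the role.

    # Minimal ETL sketch: extract rows from a relational source, apply a simple
    # transformation, and load them into a target table. All names here are
    # hypothetical and chosen only for illustration.
    import sqlite3

    def run_etl(source_db: str, target_db: str) -> None:
        src = sqlite3.connect(source_db)
        dst = sqlite3.connect(target_db)

        # Extract: pull raw readings from the assumed source schema.
        rows = src.execute(
            "SELECT sensor_id, reading, recorded_at FROM raw_readings"
        ).fetchall()

        # Transform: drop invalid readings and normalise units (mV -> V).
        cleaned = [
            (sensor_id, reading / 1000.0, recorded_at)
            for sensor_id, reading, recorded_at in rows
            if reading is not None and reading >= 0
        ]

        # Load: write the cleaned rows into the target table.
        dst.execute(
            "CREATE TABLE IF NOT EXISTS readings "
            "(sensor_id TEXT, reading REAL, recorded_at TEXT)"
        )
        dst.executemany("INSERT INTO readings VALUES (?, ?, ?)", cleaned)
        dst.commit()
        src.close()
        dst.close()

    if __name__ == "__main__":
        run_etl("source.db", "warehouse.db")

In practice a step like this would be scheduled and automated through the CI/CD tooling listed above rather than run by hand.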
Key Accountabilities:
  • Responsible for the data processing and provisioning / ETL process, using state-of-the-art architecture and technologies for data pipeline workflows (e.g. RDBMS, NoSQL, Hadoop, Kafka)
  • Enabling access to resident and external data sources
  • Helping data scientists to prepare data
  • Assisting with initial data exploration steps (binning, pivoting, summarizing, finding correlations, etc.) and cataloging existing data sources (see the illustrative sketch after this list)
  • Supporting the development and automation of MVPs and data analytics solutions within the scope of use cases defined in cooperation with various brands, markets and departments
  • Helping to streamline a better data supply chain for analytics that goes from experimentation into production
  • Automated deployment of data science environments on multi-cloud platforms
  • Independent advancement of CI/CD initiatives on multi-cloud platforms
  • Realization of innovative data processing solutions in the area of Artificial Intelligence
  • Independent scouting of emerging technologies in the areas of data science, data processing, cloud, micro services, automation and machine learning
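As a rough illustration of the initial data exploration step mentioned above, the short pandas sketch below walks through binning, pivoting/summarizing and a simple correlation check. The dataset and column names are invented for the example and are not taken from the role description.

    # Illustrative first-pass data exploration with pandas: binning, pivoting
    # and a correlation check. The data below is made up for the example.
    import pandas as pd

    df = pd.DataFrame({
        "engine": ["A", "A", "B", "B", "C", "C"],
        "hours":  [120, 340, 80, 610, 450, 220],
        "temp":   [610.0, 645.5, 590.2, 660.1, 650.3, 620.7],
    })

    # Binning: group running hours into coarse usage bands.
    df["usage_band"] = pd.cut(
        df["hours"], bins=[0, 200, 400, 700], labels=["low", "medium", "high"]
    )

    # Pivoting / summarizing: mean temperature per engine and usage band.
    summary = df.pivot_table(
        values="temp", index="engine", columns="usage_band",
        aggfunc="mean", observed=False,
    )

    # Correlation: how strongly do running hours and temperature move together?
    corr = df["hours"].corr(df["temp"])

    print(summary)
    print(f"hours/temp correlation: {corr:.2f}")

Findings from a pass like this would typically feed back to the data scientists before anything is promoted into an automated pipeline.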
Last Date to Apply: 14th February 2019
Application Link: Click Here
