Unfortunately, this job posting is expired.
Some similar listings
Senior Big Data Software Engineer
Recruited by Zillow 8 months ago Address , Remote $157,100 - $250,900 a year
Big Data ETL Developer
Recruited by Sparksoft Corporation 11 months ago Address , Remote
Junior Cloud Application Developer (PSS3), CMSRU
Recruited by Rowan University 11 months ago Address , Glassboro, 08028, NJ
Senior Big Data Engineer
Recruited by Charles Schwab 11 months ago Address , Lone Tree, 80124, CO $135,000 - $145,000 a year
Big Data SDET Jobs
Recruited by ASCENDING 11 months ago Address , Remote
Senior Staff Engineer, Big Data
Recruited by Nagarro 11 months ago Address , Remote
Data/Cloud Enterprise Architect Consultant
Recruited by Voya Financial 1 year ago Address , Remote $140,080 - $192,610 a year
Cloud And Data Intern
Recruited by Munich Re America 1 year ago Address , Princeton, 08540, NJ $21 - $24 an hour
Big Data Architect Jobs
Recruited by Leidos 1 year ago Address , Remote $118,300 - $245,700 a year

Analytics Developer (Big Data Cloud) - Intermediate

Company

Bayer

Address , Remote
Employment type CONTRACTOR
Salary
Expires 2023-07-18
Posted at 11 months ago
Job Description

Remote
Contract (7 months 10 days)
Published 8 days ago
Scala
Python
Relational Databases
AWS / GCP
Big Data
APIs
Written and verbal communication skills
Data Engineer
NoSQL datastores

Position Overview:


The Climate Corporation is revolutionizing the agriculture industry with a platform and products that are helping the world's farmers sustainably increase productivity with digital tools. The Data and Analytics team is focused on creating competitive advantage for Climate and our customers through novel data infrastructure, metrics, insights, and data services. We are a small but rapidly growing engineering team that builds and leverages state-of-the-art analytics systems. Our work informs decisions and direction for our business, while also impacting our products.


What You Will Do:

  • Work with cutting-edge open-source platforms such as Hadoop, YARN, Flink, and Spark
  • Help design and build a data warehouse; build and maintain the core data model, ETL/ELT, core data metrics, and data quality
  • Design and implement highly scalable data-intensive processing systems
  • Develop infrastructure to report on key metrics, recommend changes, and predict future results
  • Design and implement REST APIs and data processing pipelines for mobile, web, and third-party integrations
  • Champion data warehousing best practices
  • Collaborate with product, data science, and engineering on the next generation of the industry-leading agriculture platform
  • Ensure our codebase is continuously deliverable and deployable
  • Actively participate in the design and code review process across the team
  • Produce high-quality code with an emphasis on software craftsmanship
  • Help design and build feature engineering pipelines for production models
  • Rapidly prototype new analytics views and work directly with stakeholders across multiple functions (Science, Marketing, Sales, Risk, Finance, Product)
  • Work with multiple cloud systems, such as AWS and GCP, and their services
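As a purely illustrative sketch of the ETL and core-metrics work described above (the table names, schema, and metric are invented for this example and are not part of the actual Climate/Bayer stack), a minimal extract-transform-load step over a relational store might look like:

```python
import sqlite3

# Hypothetical sketch of a tiny ETL step. Schema and names are invented
# for illustration; a real pipeline would target a warehouse, not SQLite.

def run_etl(conn: sqlite3.Connection) -> int:
    """Extract raw field readings, transform them into a per-field
    metric, and load the result into a warehouse-style summary table."""
    cur = conn.cursor()

    # Extract: raw per-field yield readings
    rows = cur.execute("SELECT field_id, yield_bu FROM raw_readings").fetchall()

    # Transform: average yield per field (a toy "core data metric")
    totals: dict[str, list[float]] = {}
    for field_id, yield_bu in rows:
        totals.setdefault(field_id, []).append(yield_bu)
    summary = [(f, sum(v) / len(v)) for f, v in totals.items()]

    # Load: rewrite the summary table so the step is idempotent
    cur.execute("DELETE FROM field_yield_summary")
    cur.executemany(
        "INSERT INTO field_yield_summary (field_id, avg_yield_bu) VALUES (?, ?)",
        summary,
    )
    conn.commit()
    return len(summary)

# Usage with an in-memory database
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_readings (field_id TEXT, yield_bu REAL);
    CREATE TABLE field_yield_summary (field_id TEXT, avg_yield_bu REAL);
    INSERT INTO raw_readings VALUES ('f1', 180.0), ('f1', 200.0), ('f2', 150.0);
""")
print(run_etl(conn))  # number of fields summarized
```

The delete-then-insert load keeps the step idempotent, so re-running the pipeline does not duplicate metrics — a common requirement in scheduled ETL jobs.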


Basic Qualifications

  • Experience designing database schemas for complex and evolving data sets
  • 5+ years of experience with Python, Scala, and/or Java
  • A successful history of manipulating, processing, and extracting value from large, disconnected datasets
  • Experience building scalable backend services (REST APIs, microservices, efficient data processing algorithms, messaging paradigms, middleware, persistent stores)
  • Experience with messaging paradigms and middleware (SQS, JMS, etc.)
  • Good understanding of cloud design considerations and limitations, and their impact on pricing
  • Experience with Amazon Web Services (EC2, S3, RDS, SQS, EMR, etc.) (strong plus)
  • 5+ years of experience architecting, designing, developing, and implementing cloud solutions on AWS/GCP platforms
  • Proven ability to collaborate with multidisciplinary teams of business analysts, developers, data scientists, and subject-matter experts
  • 5+ years of experience working with relational and NoSQL datastores
  • 2+ years of experience with Spark, Flink, and/or distributed computing
  • 5+ years of experience in a Data Engineer role, with a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field


Preferred Qualifications:

  • Experience with Python and Scala.
  • Experience in several of the following areas: database architecture, ETL, business intelligence, big data, feature engineering, advanced analytics
  • Experience with big data on GCP: BigQuery, Pub/Sub, Dataproc, Dataflow
  • Experience working in both AWS and GCP cloud systems
  • Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
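Workflow tools like Airflow and Luigi model a pipeline as a dependency DAG and run each task only after its upstream tasks finish. As a rough stdlib-only sketch of that idea (the task names are invented; this is not Airflow's API), dependency ordering can be expressed with a topological sort:

```python
from graphlib import TopologicalSorter  # Python 3.9+ standard library

# Hypothetical pipeline dependencies, in the spirit of an Airflow/Luigi DAG:
# each task maps to the set of tasks it depends on.
dag = {
    "extract": set(),
    "clean": {"extract"},
    "features": {"clean"},
    "metrics": {"clean"},
    "load_warehouse": {"features", "metrics"},
}

# static_order() yields a valid execution order: every task appears
# after all of its dependencies.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

Real schedulers add retries, backfills, and parallel execution of independent tasks (here, "features" and "metrics" could run concurrently), but the dependency-ordering core is the same.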


IMPORTANT NOTE for POTENTIAL US CANDIDATES: Bayer expects its colleagues to be fully vaccinated against COVID-19. Bayer defines fully vaccinated in alignment with CDC guidance: two weeks after completing the two-dose vaccine regimen or two weeks after completing the one-dose regimen. Additionally, Bayer colleagues must comply with state, local, and customer requirements.


MON1JP00030119