What you will do is why you should join us:
- Be a critical senior member of a data engineering team focused on creating distributed analysis capabilities around a large variety of datasets
- Take pride in software craftsmanship, apply a deep knowledge of algorithms and data structures to continuously improve and innovate
- Work with other top-level talent solving a wide range of complex and unique challenges that have real-world impact
- Explore relevant technology stacks to find the best fit for each dataset
- Pursue opportunities to present our work at relevant technical conferences
- Bring your talent to the projects that need it; strength of ideas trumps position on an org chart
If you share our values, you should have:
- At least 7 years of experience in software engineering
- At least 2 years of experience with Go
- Proven experience (2 years) building and maintaining data-intensive APIs using a RESTful approach
- Experience with stream processing using Apache Kafka
- Comfort with unit testing and test-driven development methodologies
- Familiarity with creating and maintaining containerized application deployments with a platform like Docker
- A proven ability to build and maintain cloud-based infrastructure on a major cloud provider like AWS, Azure, or Google Cloud Platform
- Experience data modeling for large-scale databases, either relational or NoSQL
Bonus points for:
- Experience with protocol buffers and gRPC
- Experience with Google Cloud Platform, Apache Beam and/or Google Cloud Dataflow, and Google Kubernetes Engine or Kubernetes
- Experience working with scientific datasets, or a background in applying quantitative science to business problems
- Bioinformatics experience, especially large-scale storage and data mining of variant data, variant annotation, and genotype-to-phenotype correlation