Data Engineer

Edinburgh / London / New York (Remote Accepted) | Full-Time | Experienced
What will you do

As a snap40 Data Engineer, you will work as part of our engineering and product team to build scalable data pipelines that impact every area of our engineering and product work. You will be involved in every stage of the product life cycle, deploying your work out into the wild and seeing its positive impact on real people. You will work on new features as well as on our existing codebase.

As a specialist in data engineering, you will help us scale our data pipelines to meet new challenges as we grow as a business and gain increasing numbers of customers and use cases. We are currently improving our event-driven, message-based microservice platform to embrace real-time, highly available distributed streaming technology, which will enable our engineers and data scientists to meet our ambitious product goals over the coming months.
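To give a flavour of the kind of stream-processing work this involves, here is a minimal, hypothetical sketch of a real-time aggregation job. It assumes Kafka Streams in Java; the topic names, keys and window size are illustrative assumptions only, not a description of our actual pipeline:

  import java.time.Duration;
  import java.util.Properties;

  import org.apache.kafka.common.serialization.Serdes;
  import org.apache.kafka.streams.KafkaStreams;
  import org.apache.kafka.streams.KeyValue;
  import org.apache.kafka.streams.StreamsBuilder;
  import org.apache.kafka.streams.StreamsConfig;
  import org.apache.kafka.streams.kstream.KStream;
  import org.apache.kafka.streams.kstream.TimeWindows;

  public class VitalsStreamSketch {

      public static void main(String[] args) {
          Properties props = new Properties();
          props.put(StreamsConfig.APPLICATION_ID_CONFIG, "vitals-aggregator");
          props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
          props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
          props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

          StreamsBuilder builder = new StreamsBuilder();

          // Raw device readings, keyed by a patient identifier (hypothetical topic).
          KStream<String, String> readings = builder.stream("device-readings");

          // Count readings per patient over a one-minute tumbling window and
          // publish the counts downstream for other services to consume.
          readings
              .groupByKey()
              .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(1)))
              .count()
              .toStream()
              .map((windowedKey, count) -> KeyValue.pair(windowedKey.key(), count.toString()))
              .to("patient-reading-counts");

          KafkaStreams streams = new KafkaStreams(builder.build(), props);
          streams.start();
          Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
      }
  }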

About you
  • You are flexible and can learn on the job quickly
  • You enjoy solving problems and making a difference
  • You can pragmatically balance quality with a fast-paced schedule
  • You are a good team player, ready to help, debate, compromise and work together
  • You are comfortable working, prototyping and delivering incrementally, adapting to customer needs and technical difficulties, always with the user in mind
  • You have an eye for detail and have spotted the typo in the second bullet point
  • You always look at the big picture - don't bother with the previous point, there is no typo
We would like you to...
  • Have a degree in Computer Science or a related field, or equivalent training or work experience
  • Have a deep knowledge of at least one modern programming language and a willingness to learn new ones as required
  • Have experience writing tests and testable code
  • Be comfortable reviewing, releasing, deploying and troubleshooting your own and other people's code
  • Bring experience to the team in distributed real-time stream processing and complex event processing technologies
  • Have previous success in engineering at scale in a distributed systems environment
  • Have a practical understanding of cloud computing and networking - we use AWS with Nomad for microservice management
  • Have experience collaborating with data scientists, product teams and other consumers of data assets
Bonus points for...
  • Familiarity with key big data technologies, such as Hadoop, MapReduce and Apache Spark
  • A background involving Apache Kafka or other distributed data streaming platforms
  • Experience with API design/development
Technologies we use
  • Backend: Java (Spring), Python, .NET
  • Frontend: JavaScript (TypeScript), Angular, Ionic, npm
  • Databases: PostgreSQL (RDS), Couchbase and others
  • Infrastructure: Linux, RabbitMQ, AWS via Terraform, Chef, Nomad, Consul and Fabio
  • Data Science and ML: H2O, Jupyter, TensorFlow, Keras and Spark
  • Monitoring: DataDog and ELK
About us

In just 2 years, we built our product, monitored 1,000 patients, assembled a phenomenal team of 21 and gained EU regulatory approval. We raised one of the largest seed rounds in UK history. We're now bringing our product to some of the top healthcare providers in the world.

We offer a flexible work environment, where you’ll have the autonomy and freedom to do what you do best. We are hugely ambitious and focussed, but we have fun. As a company we are supportive, trusting and transparent, and we’re early-stage enough that you’ll have the chance to build the company you want to work for.

Something to think about: you can go home after a day’s work knowing that you may well have directly contributed to saving someone’s life. This gets everyone in the company very excited.

On top of that we provide:

  • Competitive salary
  • Stock options – our company is your company. We want to build it together
  • Spec your own dev environment
  • Free lunch every Wednesday
  • Remote friendly with support for flexible working

Apply for this position