Data Engineer vacancy at Tessian
At Tessian, we predict and preempt security risks and prevent human error.
Job description
We’re looking to bring another Data Engineer into the Tessian family.
We'd love to meet someone who:
- Is a highly skilled developer who understands software engineering best practices (git, CI/CD, testing, code review, etc.) and infrastructure-as-code principles
- Has experience working with distributed data systems such as Spark
- Has designed and deployed data pipelines and ETL systems for data at scale
- Has deep knowledge of the AWS ecosystem and has managed AWS production environments
- Has experience with scheduling systems such as Airflow (see the sketch after this list)
- Ideally, has been involved in machine learning infrastructure projects, from automated training through to deployment
- Has the ability to break ambiguous problems down into concrete, manageable components and think through optimal solutions
- Enjoys “getting their hands dirty” by digging into complex operations
- Takes a high degree of ownership over their work
- Is a clear communicator with professional presence
- Has strong listening skills; open to input from other team members and departments
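To give a flavour of the orchestration work mentioned above, here is a minimal sketch of a daily Airflow DAG. The DAG id, task names, and the extract/load callables are hypothetical examples, not taken from Tessian's codebase.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_emails(**context):
    # Hypothetical task: pull one day's email metadata into a staging area.
    print("extracting batch for", context["ds"])


def load_to_warehouse(**context):
    # Hypothetical task: load the staged batch into the warehouse.
    print("loading batch for", context["ds"])


with DAG(
    dag_id="email_metadata_etl",  # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_emails)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

    extract >> load  # extract must finish before load starts
```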
Data Engineering at Tessian
As a high-growth scale-up, we have email datasets that are growing at an exponential rate. This is great, as it allows us to train best-in-class machine learning models to prevent previously unpreventable data breaches. However, we're at the scale where our current data pipelines aren't where we want them to be, which is why we're looking to bring another Data Engineer into the Tessian family. You will sit in our Data Engineering team and work day-to-day with our Data Scientists to build out infrastructure and pipelines, empowering teams to iterate quickly on terabytes of data. You'll be hugely impactful in this high-leverage role: we strongly believe that if we can query all of our data in near real time on scalable systems, we can deliver limitless value to our clients through the data breaches we prevent.
Some interesting projects we’re working on:
- Building an ingestion system to process insights from different models using Kafka and Spark (a sketch follows this list)
- Designing the next-generation data lake, setting ourselves up to handle massive future scale
- Creating a framework allowing us to standardise how we deploy all our ML models to production
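As a rough illustration of the first project, a Spark Structured Streaming job can consume model insights from Kafka, parse them, and land them for downstream querying. The broker address, topic name, event schema, and S3 paths below are assumptions made for the sketch, not details of Tessian's actual system.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("insights-ingestion").getOrCreate()

# Hypothetical schema for a model "insight" event.
insight_schema = StructType([
    StructField("model_id", StringType()),
    StructField("email_id", StringType()),
    StructField("verdict", StringType()),
    StructField("emitted_at", TimestampType()),
])

# Read the raw event stream from Kafka.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")  # illustrative broker
    .option("subscribe", "model-insights")               # illustrative topic
    .load()
)

# Kafka values arrive as bytes; parse the JSON payload into typed columns.
insights = raw.select(
    from_json(col("value").cast("string"), insight_schema).alias("insight")
).select("insight.*")

# Land the parsed stream as Parquet so other teams can query it.
query = (
    insights.writeStream.format("parquet")
    .option("path", "s3://example-bucket/insights/")  # illustrative path
    .option("checkpointLocation", "s3://example-bucket/checkpoints/insights/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```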
On a day-to-day basis you'll get to:
- Build systems to efficiently handle our ever-increasing volume of data
- Design and implement data pipelines, as well as own the vision of what our systems could achieve in the future
- Work with Data Scientists to train, version, test, deploy and monitor our machine learning models in production
- Design systems to expose data to our product and engineering teams in a performant way (a query sketch follows below)
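As a small example of that last point, data landed as Parquet can be registered as a table and queried directly, keeping interactive queries fast even at terabyte scale. The path, table name, and columns are illustrative and reuse the hypothetical schema from the ingestion sketch above.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("expose-insights").getOrCreate()

# Register the landed Parquet data as a table teams can query directly
# (the path and table name are illustrative).
spark.read.parquet("s3://example-bucket/insights/").createOrReplaceTempView("insights")

# Spark reads only the columns and files the query needs, so aggregations
# like this stay fast as the dataset grows.
daily_verdicts = spark.sql("""
    SELECT model_id, verdict, COUNT(*) AS n
    FROM insights
    WHERE emitted_at >= date_sub(current_date(), 1)
    GROUP BY model_id, verdict
""")
daily_verdicts.show()
```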