
Senior Data Engineer (Python, Remote)

Dashly is looking for an independent Senior Data Engineer who enjoys an informal, fast-paced startup environment with lots of freedom and responsibility, and who brings the following core skill set:

  • Python 3 (our main programming language)

  • DataOps practices (building and maintaining pipelines, processing automations)

  • Cloud-native development (Google Cloud Platform)

  • Solid English language skills (C1, or B2 at the very least)

Got you interested? Read more below:

What is Dashly?

Our mission is to give homeowners across the UK a better deal on their mortgage, saving them thousands of pounds each and every year. 

We’ve got an ambitious vision to create a suite of products that together will reshape the mortgage market for the good of all – borrowers, advisors, brokers and lenders. To do that, we’ve built a powerful mortgage search engine, data platform, together with web and mobile apps. Now it’s time for us to upgrade our platform to scale in line with our growth rate, and utilise our gained experience to design new data solutions that will help lenders to offer new, more efficient ways to design and distribute products. On top of that, we’re building tools that let advisors truly manage their customer relationships and empower them to improve the service they can offer their clients.

Who are we looking for?

We're looking for experienced data engineers with a passion for technology, software, elegant solutions, and clean code. You’ll become part of our engineering team, while closely working with our data operations team to build out automated automations for our data processing processes … Wait, what? Anyway, what we’re trying to say is that you’ll help us automate our data processing tasks, which we like to call Transform (convert, parse, extract, and generally clean up received data; imagine turning unstructured PDFs into structured JSONs/CSVs), Enhance (query our data sources to pull in missing information, improve the accuracy of estimates, and verify data validity), and Import (that is, upload data into our platform and deal with any and all errors along the way).
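
To give a minimal flavour of what a Transform step might look like in Python (the field names, formats, and values here are invented for illustration, not taken from Dashly’s actual schema):

```python
from datetime import datetime

def transform_record(raw: dict) -> dict:
    """Normalise one raw record (e.g. a row extracted from a PDF)
    into the structured shape we'd import downstream."""
    return {
        # Tidy up free-text postcodes: trim whitespace, force upper case.
        "postcode": raw["Post Code"].strip().upper(),
        # Strip currency symbols and thousands separators, parse as a number.
        "balance_gbp": float(raw["Balance"].replace("£", "").replace(",", "")),
        # Convert a UK-style date string to ISO 8601.
        "start_date": datetime.strptime(raw["Start"], "%d/%m/%Y").date().isoformat(),
    }

raw = {"Post Code": " sw1a 1aa ", "Balance": "£185,000.00", "Start": "01/09/2021"}
print(transform_record(raw))
# {'postcode': 'SW1A 1AA', 'balance_gbp': 185000.0, 'start_date': '2021-09-01'}
```

Real Transform jobs would of course pull the raw dicts out of PDFs or spreadsheets first; this just shows the clean-up shape of the task.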

We don't want you to just write code for us; we want you to be part of our feature development pipeline and to be involved (at least in some capacity) in all of its stages: analysis, design, implementation, testing, deployment, and documentation.

It’d be awesome if you had experience with all of the following, but we’re happy to work with you and teach you the things you need to know:

Programming & scripting

  • Python

    • Yup, that’s our main programming language, so this kinda goes without saying

    • You’re going to need at least some experience with the following libraries:

      • numpy, scipy, pandas, gcloud

    • We also extensively use (not an exhaustive list):

      • fuzzywuzzy, openpyxl, xlsxwriter, pypdf, vertexai

    • If you know stuff like this, we’re probably going to love you very much:

      • pytorch, keras

  • Shell

    • Big part of your role is automation (both on your local machine, as well as on the cloud-deployed VMs)

    • Bash is our go-to tool for automation outside of the Python runtime

  • Infrastructure (GCP)

    • Building serverless workloads

      • Scripts in Cloud Functions, docker images in Cloud Run

    • Managing multiple data sources

      • Cloud SQL as our primary data store

      • Cloud Storage for objects (a.k.a. files, but don’t tell anyone we said that :D)

      • BigQuery for data analysis, aggregation, and reporting

    • Pipeline orchestration

      • Triggers (for Cloud Functions and Cloud Storage)

      • PubSub and Eventarc for more complex orchestrations

    • Cool AI stuff

      • Vertex AI & the entire GCP AI Platform

    • Security & IAM

      • Working with sensitive PII means there’s a huge emphasis on security, i.e. you’ll need to understand IAM access configurations
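
As a concrete taste of the orchestration side: Pub/Sub push deliveries wrap the payload in a JSON envelope, with the message body base64-encoded under message.data. Here’s a minimal, dependency-free sketch of decoding one (the handler name and payload fields are hypothetical):

```python
import base64
import json

def handle_push(envelope: dict) -> dict:
    """Decode a Pub/Sub push envelope: the payload sits in
    envelope["message"]["data"] as a base64-encoded string."""
    data = base64.b64decode(envelope["message"]["data"]).decode("utf-8")
    return json.loads(data)

# Simulate what Pub/Sub would POST to a Cloud Function / Cloud Run endpoint:
payload = {"file": "statements/2024-05.pdf", "action": "transform"}
envelope = {"message": {"data": base64.b64encode(json.dumps(payload).encode()).decode()}}
assert handle_push(envelope) == payload
```

In a real Cloud Function you’d get the envelope from the incoming request body; the decoding step itself is exactly this.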

Data reporting

  • Big part of our data operations is querying, investigating, and analysing our data, and building reports

  • We use a bunch of great tools to achieve that, all of which you’ll get access to

    • BigQuery Studio (data queries and analysis)

    • Colab Enterprise (basically Jupyter notebooks; data investigation)

    • Looker Pro (for all those reports)
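
To give a flavour of the reporting side, here’s a small pandas sketch of the kind of aggregation you might run after pulling a query result out of BigQuery (the columns and figures are invented for illustration):

```python
import pandas as pd

# Hypothetical per-mortgage records, shaped like a BigQuery query result.
df = pd.DataFrame(
    {
        "lender": ["Alpha", "Alpha", "Beta", "Beta", "Beta"],
        "rate_pct": [4.2, 3.9, 5.1, 4.8, 4.5],
        "balance_gbp": [180_000, 220_000, 150_000, 300_000, 250_000],
    }
)

# Per-lender summary: average rate and total balance under management.
report = (
    df.groupby("lender")
    .agg(avg_rate=("rate_pct", "mean"), total_balance=("balance_gbp", "sum"))
    .reset_index()
)
print(report)
```

In practice you’d often run the aggregation in BigQuery itself and use pandas in a Colab notebook for the follow-up investigation.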

Personality traits

  • Openness and solid communication skills

    • You'll be talking to people from the creative, product and business teams

    • We want your voice to be heard

  • Willingness to learn new technologies

  • Proactive approach, initiative, and ownership of your work

  • Proficiency in speaking and writing in English, as you'll be using it on a daily basis

Check out our entire tech stack description below 👇

What do we offer in return?

You'll be joining a dynamic startup environment. What does that mean? Well, if you're looking for corporate development plans organised into several weeks' worth of work, outlined in project plans set in stone, with strictly defined assignments, lots of rules and processes, and a do-as-you're-told, don't-ask-questions attitude on top of it all, you won't find any of that with us, sorry.

Instead, we're looking for independent mavericks, exceptional individuals, explorers and contributors who enjoy freedom and creativity in their work, and are willing to take initiative and responsibility for their assignments, or even whole portions of our technology stack.

We don’t apply any “agile” frameworks, don’t follow scrum guidelines, don’t waste time on pointless meetings and ceremonies. Great engineering is all we need.

Our core engineering values are:

  • Leadership and guidance works better than management and rules

  • People are more important than processes

  • Failure is a learning experience

  • Always act in the company's best interest

These values guide our leadership, our managers, and our engineers alike. We're building a team of individuals that strives for a common goal. A team where initiative, innovation and great ideas are welcomed and rewarded, where failing while trying to innovate is considered a success as long as we learn from it, and the only thing that's not allowed is passively sitting around, waiting for someone to tell you what you should be doing.

Okay, that was about enough with the buzzwords, let’s get down to specifics:

Employment type? 

Contract, Full-Time

We’ll agree on a fixed sum which you’ll be invoicing us each month, guaranteeing you a consistent and predictable income. We expect you to work roughly 40 hours a week for us in exchange.

The contract will have a 2-month termination period, guaranteeing stability and predictability for both parties.

Contractor? Does that mean no holidays? 

Absolutely NOT!

You’ll get 25 days of paid annual leave, on us! We want you to take time off every now and then and recharge.

Oh, cool, and what about hardware?

We’ll provide you with a laptop, either a Mac or a Dell, whichever you prefer.

What’s the work environment like at Dashly?

We’re building a relaxed, free working environment built on transparency, communication and mutual trust. As long as everyone’s happy with your performance and results, nobody is going to tell you what to do and how to organise your time.

Flexible working hours:

  • Mandatory late morning daily stand-up

  • Expected general availability during working hours, especially morning and noon

  • But you can take a break anytime as long as you let your team members know in advance

  • You can choose when you begin and end your work day

  • We work fully remotely: no required office visits or mandatory corporate fun parties

  • But you’re always welcome to stop by at any of our offices if you wish to

Office space available in Prague (Czechia) and London (United Kingdom) for team meetings and get-togethers.

What about the tech stack?

(1/2: This part is about the back end platform …)

Dashly runs a microservice architecture on Google Cloud Platform, written in Python (3.11 as of the time of writing).

We’re utilising the following communication protocols:

  • HTTP REST API (front end to back end network calls)

  • gRPC & Protobuf (internal back end service-to-service network calls)

And the following cloud services for core functionality:

  • Kubernetes (GKE)

    • running in Autopilot mode with GCE Ingress controller

    • with Helm for container deployments

  • CloudSQL (PostgreSQL 15, relational database)

  • Cloud PubSub (for asynchronous messaging)

  • Cloud Storage (as the object/file store)

  • Cloud Endpoints (automated HTTP-to-gRPC network call transformation)

  • Cloud Operations (Stackdriver, for logging & monitoring)

  • Cloud Build (for CI/CD pipelines)

  • (Google Cloud should probably reconsider their product naming scheme?)

And additional cloud services for day-to-day operations outside the back end platform:

  • BigQuery (data analysis, business intelligence)

  • Firebase (JWT authentication, static asset hosting)

  • Vertex AI (for all the OCR, data analysis, and LLM magic)

Plus from the “DevOps” point of view, we have:

  • Terraform for GCP infrastructure management,

  • and last, but not least, Docker for containerization.

(2/2: And this part is about our UI applications …)

And all of the above is accessed via a couple of front end applications, which utilise the following tools:

  • ngrx for state management,

  • storybook for component development,

  • primeng as the main component library,

  • webpack for local development and build compilation,

  • and a whole bunch of other useful tools and libraries.

We don't expect you to know all of the languages and technologies we use on day 1. Being an experienced senior Python engineer is enough; we'll give you the time and support to learn to work with all of our technologies. Experience with gRPC and Google Cloud is a serious advantage, but we're happy to work with you and bring you up to speed. The same applies to front end development: say you have experience with Redux but have never worked with ngrx; no sweat, you’ll get the opportunity to learn its ins and outs.

We're investing heavily in a redesign of our application architecture and code refactoring. On top of that, we're also improving the way we handle our data, our entire infrastructure, and overall architecture of our solutions and our developer experience. This is a very exciting time to be a part of Dashly, and we're expecting a lot of input and initiative from you in our endeavour.

What tools do you use?

When it comes to day-to-day operations and work organisation, our teams use Google Workspace (Gmail, Calendar, Drive, Meet, Keep, Tasks, etc.), Slack for messaging, GitHub for our repositories, and Jira and Confluence for issue tracking and documentation. Oh, and we’re big on visual analysis, so we use Miro for boards and diagrams.

Plus developer tools like IntelliJ IDEA (PyCharm), GitKraken, and Postman.

What else should I know?

Our business and product teams are located in the UK (GMT time zone); our creative, marketing, and engineering teams are located in Central Europe (CET time zone, GMT+1).


