Tallinn • Remote
The shift towards self-serve analytics means that in many companies, virtually everyone (from support agents to CEOs) is directly consuming data. It has fallen to data engineers to keep these new (and demanding) data ‘customers’ satisfied, providing the infrastructure that delivers actionable insights from raw data.
Data engineers report being continually bombarded with queries via Slack: a BI dashboard breaks, a metric seems ‘off’, a table is missing data, a user needs access to a dataset, to name a few. Prioritising these ad-hoc tasks on the fly, alongside their core responsibilities and without compromising on security and privacy, is no walk in the park.
Alvin acts as a data engineer’s co-pilot, giving them the context they need to make better decisions faster. Alvin plugs into data tools and extracts metadata that enables use cases such as impact analysis (what happens if I delete this column?), problem tracing (why did this dashboard break?) and usage analytics (is anyone using this column?).
Our core tech builds and maintains a real-time data structure that maps data between tools in a unified way; this is what powers our data lineage feature (currently in private beta). We’re still early stage, so we’re looking for engineers who are excited about helping us shape the product, the culture and the business.
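To give a flavour of the idea (this is an illustrative sketch, not Alvin’s actual implementation): a lineage graph can be modelled as directed edges from upstream assets to the assets derived from them, and impact analysis becomes a downstream traversal. All asset names below are made up for the example.

```python
from collections import defaultdict, deque

def downstream_impact(edges, asset):
    """Return every asset that depends, directly or transitively, on `asset`."""
    graph = defaultdict(list)
    for upstream, downstream in edges:
        graph[upstream].append(downstream)
    # Breadth-first walk over everything reachable from the asset.
    seen, queue = set(), deque([asset])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Hypothetical lineage: a raw column feeds a model, which feeds a dashboard.
edges = [
    ("raw.orders.amount", "models.revenue.total"),
    ("models.revenue.total", "looker.revenue_dashboard"),
    ("raw.orders.id", "models.revenue.order_count"),
]

# "What happens if I delete raw.orders.amount?" -> the model and the dashboard break.
print(sorted(downstream_impact(edges, "raw.orders.amount")))
# ['looker.revenue_dashboard', 'models.revenue.total']
```

Running the same query in the opposite direction (reversing the edges) answers the problem-tracing question: which upstream assets could have broken this dashboard?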
The Alvin Team (so far…)
Martin was the technical founder at Unacast, where he developed an unnatural love for massive datasets, and is an active open source contributor. Dan has lent his skills to scaling data-oriented startups with lofty ambitions, from eliminating food waste to giving people ownership of their personal data. Marcelo will join us very soon, bringing to Alvin his 7 (SEVEN!) GCP certifications, regular blog posting, and extensive experience taming metadata in the wild. We aim to be a highly skilled and diverse team of 10 by the end of ‘21.
Our product principles
Build, learn, repeat. So far our strategy has been to conduct pilot projects, most notably with TransferWise. We have taken those learnings, along with countless conversations with helpful data folks, and released our private beta to learn even more. And so it continues.
Laser focused on our users: data engineers. Data governance, the space we broadly fit into, tends to be top-down: tools and rules thrust upon data engineers. Instead, Alvin is being built by, and in close collaboration with, data engineers, solving their real ‘hair on fire’ problems by augmenting (rather than interrupting) their workflows.
Deep integrations that ‘just work’. Alvin becomes more powerful with every big data tool (e.g. Snowflake, Looker, Airflow, dbt) it connects with; it feeds on metadata after all. We’re continually adding new integrations based on the greatest potential impact, whilst sticking to our principles of zero setup and metadata only access.
Taking what we have (some cool tech) and crafting it into a product (Alvin) is more exciting than daunting. You don’t want your contribution to be measured only in lines of code. We push each other’s thinking to places we wouldn’t have found on our own.
Your broad experience and capacity to learn quickly mean our precise tech stack matters little to you. We’ll certainly be more interested in the type of engineer and person you are than the technologies you can list. But if you’re interested, we dabble in Python, Vue, TypeScript and Kotlin.
You’re confident talking about the big data space, and feel a genuine passion for the problems we’re solving. A score of at least 60% in the ‘Pokemon or Big Data’ quiz is a plus, but not required.
We’ve secured significant seed funding from an award-winning and hands-on Nordic VC. You will be offered a competitive salary and meaningful equity.
You won’t be excluded from consideration based on where you want to be physically located while working. That said, we’d be delighted to have you with us in Lift99 (cool co-working space), Tallinn. Friendly visa rules mean we’re able to move you here from anywhere in the world.