About Us
Gridcog helps a broad variety of organisations assess potential investments in electrical energy systems, usually grid-connected. We do this by letting customers build a model of their existing sites and energy assets, then run many alternative scenario simulations that incorporate streams of time series data from live markets, forecasts, and real-world meter data, drawing on a library of specifications for assets, pricing structures, and financial schedules. Our software is powerful, flexible, and complex.
Our very smart users save time and money by not having to build their own large Excel models and Python simulations. Our success lies in being extensively configurable. We need people who are smart, curious, love to learn, and are driven; people who relish the challenge of doing hard things right.
Our software is used by customers around the world, with a current focus on Australia, the UK and Europe. Customers include all participants in the energy system: suppliers, project developers, large energy users, and a growing ecosystem of technology providers and consultants.
We’re a small, talented and adaptable team. Our people have a diversity of skills and backgrounds, and we strive to use and develop our different capabilities over a variety of projects. We presently have around a dozen people in Engineering and around two dozen in the company – and we’re growing. We have clusters in London, Perth, and Melbourne, and a few fully remote team members. For this role, our preference is a candidate based in Melbourne or Perth, but we’re open to exceptional candidates elsewhere.
Role Summary
We’re looking for a talented Senior Software & Data Engineer to join our data team. Our platform is composed of a number of different subsystems built in Python and TypeScript, deployed to AWS, and accessed via GraphQL and a modern React app. The focus of this role will be the components involved in data processing and visualisation, but an ability to work across these different languages and technologies will be advantageous.
A key component of the Gridcog platform is the ingestion and processing of a wide variety of data sources related to energy generation, usage, and prices: from energy regulators, energy suppliers, and customer assets such as solar and wind farms and large-scale batteries. You will help evolve and maintain our Python codebases focused on data processing and presentation, helping customers design and deliver the most efficient and effective energy transition outcomes.
Requirements
Our ideal candidate has:
- Deep and current experience with test-driven software engineering techniques, and data processing with Polars, pandas, NumPy, and similar
- Experience designing and building data integrations with Python, operating on significant quantities of timeseries data
- Experience with a range of database technologies, both SQL and NoSQL: for example DynamoDB, MongoDB, and Postgres, and data warehouse technologies such as Redshift, Snowflake, and ClickHouse
- Experience with data visualisation techniques across various tools, including analytics/BI platforms such as Tableau, Looker, and Grafana, or software libraries such as Altair/Vega, D3, and Bokeh
- Experience with presenting high-volume time series data to end-users, for example in fintech, scientific modelling, or tech observability systems
- Experience with ETL/ELT pipelines and both structured and unstructured data stores
- Familiarity with a broad range of AWS services, IaC and serverless event-driven architectures
- Strong problem-solving and analytical skills
- Solid foundation in software design, data structures and algorithms
- System design skills: design robust, reliable and highly available services
- Ability to work collaboratively in both in-person and remote work environments
- Ability to communicate technical concepts clearly to technical and non-technical team members
- Experience with API design, database schema design, and automated testing
- CI/CD development experience and modern monitoring and observability techniques
- A background in energy markets, scientific computing, or financial markets modelling is likely to be advantageous
- A growth mindset, experience with startup SaaS, and an interest in the energy system transition are all greatly beneficial
What you’ll do:
- Build and take ownership of key components of our SaaS product, with a focus on data - data flows, processing, quality, verification, presentation and visualisation
- Utilise your in-depth knowledge of AWS services to build scalable, reliable, and highly available systems.
- Work on data ingestion, processing, aggregation, and data pipeline components to enable seamless data transformation.
- Design and implement APIs and events to enable integration with other applications.
- Optimise software components for performance and scalability to handle large data volumes efficiently.
- Create and maintain clear and comprehensive documentation for software architecture and code.
- Collaborate with product managers, other software and data engineers, and data scientists to understand and address customer needs.
- Troubleshoot and resolve software issues, including bug fixes, performance improvements, and enhancements.
Benefits
- Competitive salary package aligned with experience and skills.
- Opportunity to work in a remote-first business with flexible working arrangements.
- Weekly opportunities for in-person collaboration at co-working spaces and an annual whole company retreat.
- Join a high-performing team of unapologetic energy and tech nerds to tackle significant challenges.
- Engage in a high-trust distributed team environment that values innovation and creative problem-solving.
- Contribute to the decarbonisation of the world's energy system.
- Time and budget support for ongoing professional and personal development.
- Opportunity for ESOP participation.