Data Engineer, Analytics Systems

Founded in 2014, Even is a B2B fintech company that is transforming the way financial institutions find and connect with consumers. As the leading search, comparison, and recommendation engine for financial services, Even seamlessly bridges financial institutions (including American Express, Goldman Sachs, and SoFi) and channel partners (such as TransUnion and MoneyLion) via its simple yet robust API and embeddable marketplaces. We turn any consumer touchpoint into an ROI-driven, fully customizable, programmatic acquisition source with full compliance and security at scale. Our investors are leading financial services firms and VCs, including American Express Ventures, Canaan Partners, Citi Ventures, F-Prime Capital (Fidelity), Greatpoint Ventures, Goldman Sachs, LendingClub, and MassMutual Ventures. We were placed in the Top 50 of the 2020 Deloitte Technology Fast 500, awarded to the fastest-growing tech companies in the world, and have originated over $3B in credit to date.

We are looking for a Data Engineer to join the Business Intelligence (BI) and Data Science (DS) team. You will primarily support the BI team by designing, building, and implementing performant tables and the ETL processes that feed them, to assist with analytics. You will also support the DS team with the construction and maintenance of various pipelines for data science analysis. This role requires familiarity with data modeling, schema design, ETL, and big data processing, as well as some knowledge of data analysis. You will report to Business Intelligence but work on data initiatives across both Business Intelligence and Data Science. Additionally, you will be responsible for meeting data engineering chapter best practices (technical standards) as part of the engineering function at the company.

Day to Day Responsibilities:

  • Design and build standalone services to support a variety of reporting needs, using data sources like SQL databases and Redshift (and eventually Kafka) and design patterns of your choice.
  • Apply dimensional modeling techniques to design models that help business users easily understand the data, rather than expecting them to understand how the operational system works.
  • Transform complex (and sometimes messy) data from disparate sources into clean, coherent data sets for data consumers.
  • Advise in the performant design, creation, management, and implementation of large datasets.
  • Document and map the existing data structures from the operational system to the new curated data structures.
  • Evaluate new technologies and assess their practicality for integration into our existing BI and DS pipelines and infrastructure.
  • Build efficient, flexible, extensible, and scalable ETL and reporting solutions.
  • Work with engineering and BI/DS to enable the appropriate capture and storage of key data points.
  • Collaborate with a cross-functional team to implement processes and systems that automate manual work, optimize data delivery, and re-design existing infrastructure for greater scalability.
  • Contribute views on approach and architecture. We like to collaborate, review each other’s code, and pair from time to time.

Highly Preferred Proficiencies:

  • Strong interest in financial services markets or business-side problems (e.g. an accounting, finance, or marketplace background), and in making data-driven recommendations.
  • Active and curious listener, always interested in digging deeper to find the optimal solution.
  • Understands the trade-off between perfection and execution. Even is a fast-paced company – you will need to balance data-driven decision making with the rapid delivery of projects within tight time constraints.
  • Creative problem solving and a willingness to challenge the norm.
  • Bonus points: experience with Looker or Tableau.

Core Competencies

  • You practice an extremely high standard of code quality & maintainability. “Programs are meant to be read by humans and only incidentally for computers to execute.”
  • 5+ years data engineering experience.
  • Deep experience with at least one compiled language (Java or Scala preferred) and one scripting language (Python).
  • Advanced working SQL knowledge and experience working with relational databases.
  • Experience with AWS cloud services, especially Redshift, EMR, Glue, Athena, and Lambda; experience with Snowflake is also valued.
  • Experience with Airflow or other workflow management tools (Dagster, Prefect, etc.) highly preferred.
  • Knowledge of distributed data processing and distributed data stores.
  • Strong analytic skills related to working with unstructured datasets.
  • Excellent business and communication skills; able to work with cross-functional business owners to develop and define key business questions, and to build data sets that answer those questions.
  • Excellent organization and attention to detail, with the ability to prioritize multiple concurrent projects while still delivering timely and accurate results.

To be considered for this or any other exciting role, please email a copy of your CV to cloud

Job Location: New York, USA

Apply for this position