Overview

We are seeking a Senior Data Engineer to design, build, and scale robust data pipelines that collect, transform, and structure large volumes of legal and financial data gathered via scrapers. You will collaborate closely with the AI/ML, DevOps, Front-end, and Back-end teams to ensure the smooth, efficient data workflows that are integral to the platform.
Client:
Our client is a leading legal recruiting company aiming to build a data-driven platform specifically designed for lawyers and law firms. The platform brings everything together in one place — news and analytics, real-time deal and case tracking from multiple sources, firm and lawyer profiles enriched with cross-linked insights, rankings, and more.
Project Overview:
The platform aggregates data from hundreds of public sources, including law firm websites, deal announcements, legal databases, and media publications, to create a unified ecosystem of structured and interconnected legal data. It combines AI-driven enrichment, automated data processing, and scalable infrastructure to ensure comprehensive and reliable coverage of the legal market.

Responsibilities:
  • Design and implement data ingestion pipelines to collect and process structured and unstructured data from multiple online sources (web scraping, APIs, feeds, etc.).
  • Develop and optimize ETL/ELT workflows using Python, Apache Spark, and SQL.
  • Build and orchestrate scalable data workflows leveraging AWS services such as EMR, Batch, S3, and SageMaker.
  • Develop and deploy internal data APIs and utilities that support platform data access and manipulation.
  • Implement robust text extraction and parsing logic to handle diverse data formats.
  • Ensure data quality through validation, deduplication, normalization, and lineage tracking across Raw → Curated → Enriched data layers.
  • Containerize and orchestrate data workloads using Docker and native AWS solutions.
  • Collaborate closely with AI, Back-end, and Front-end teams to ensure efficient data integration and flow.
Required Qualifications:
  • Proven expertise in Python programming.
  • Hands-on experience with Apache Spark and SQL for distributed data processing.
  • Solid understanding of the AWS ecosystem, including EMR, Batch, S3, Lambda, SageMaker, and Glue.
  • Practical experience with Docker and containerized development workflows.
  • Experience with web scraping, text extraction, or other data ingestion techniques from diverse online sources.
  • Strong analytical mindset, communication skills, and ability to collaborate across multiple teams.
Note:

Our intelligent job search engine discovered this job and republished it for your convenience.
Please be aware that the job information may be incorrect or incomplete. The job announcement remains the property of its original publisher. To view the original posting and its full details, please visit the job's URL on the owner's page.

Please clearly mention that you have heard of this job opportunity on https://ijob.am.