Overview
The Senior Data Architect will play a key role in evolving the client’s data platform architecture. The primary objective of the role is to guide and support the transition from a traditional Parquet‑based data lake to an open table format, enabling scalable data management and secure data sharing through Databricks. This position combines strategic architectural ownership with hands‑on technical contribution. The successful candidate will work closely with the in‑house engineering team, reviewing and challenging the existing architecture and roadmap, providing expert recommendations, and supporting implementation where required. The role also involves mentoring engineers and helping establish best practices for large‑scale data ingestion, transformation, and sharing.

Client:
Our client is a technology company specializing in large‑scale historical market data analytics. They work with extremely high‑volume datasets, processing and transforming billions of records into analytical data products used by financial market participants worldwide. Their platform is built on modern cloud data technologies, including Databricks and Snowflake, and operates at terabyte‑to‑petabyte scale.

Project Overview:
- Review and assess the current data architecture, pipelines, and storage strategy, identifying areas for improvement and optimization
- Guide the migration from a Parquet‑based data lake to an open table format that enables Delta Sharing through Databricks (a sketch of this conversion step follows this list)
- Design and support data sharing approaches using Databricks, ensuring scalability, security, and performance
- Contribute hands‑on to data engineering and architectural work when needed
- Support the design and optimization of large‑scale data ingestion and transformation processes handling terabytes to petabytes of data
- Collaborate closely with the internal engineering team, providing technical guidance and mentoring
- Business trips to London as required to support design workshops, implementation, and knowledge transfer
Requirements:
- Extensive hands‑on experience with Databricks, including Delta Lake in production environments
- Strong expertise in Snowflake, including data modeling, performance optimization, and integration patterns
- Proven experience designing and operating large‑scale data platforms handling very large datasets
- Solid understanding of open table formats (such as Delta Lake, Apache Iceberg, or Apache Hudi)
- Experience with data lake and lakehouse architectures and modern data sharing concepts (see the Delta Sharing sketch after this list)
- Ability to work effectively at both architectural and implementation levels
- Strong communication skills and experience working collaboratively with engineering teams
- Experience working with financial market data
- Prior involvement in data sharing or data product platforms
- Cloud platform experience (AWS, Azure, or GCP)