Senior Data Engineer

Filevine is a Legal AI company delivering Legal Operating Intelligence for the future of legal work. Grounded in a singular system of truth, Filevine brings together data, documents, workflows, and teams into one unified platform — where modern legal work happens with clarity and consistency. Powered by LOIS, the Legal Operating Intelligence System, Filevine connects context across every matter to transform legal operations from reactive to proactive. LOIS reads, understands, and reasons across your data to surface insight, automate complexity, and give professionals the clarity and confidence to see more, know more, and do more. Fueled by a team of exceptional collaborators and innovators, Filevine's rapid growth has earned AI awards and recognition from Deloitte and Inc. as one of the most innovative and fastest-growing technology companies in the country.

Role Summary

    A Senior Data Engineer at Filevine is a hands-on individual contributor who designs, builds, and operates the data systems that power LOIS, our analytics products, and the agentic AI experiences our customers rely on. This role sits within the Data Engineering team and is focused on optimizing and extending Filevine's conversational self-service analytics solution — making natural-language access to legal operational data faster, smarter, and more reliable. You will partner closely with product, analytics, and AI engineering to turn raw legal and operational data into trusted, query-ready, agent-ready data products. We expect this role to spend up to 30% of its time on non-coding activities, including design, review, and cross-functional collaboration.

What You'll Do

    - Optimize and improve Filevine's production usage of Snowflake and Cortex features — including warehouse management (usage, sizing, monitoring, etc.), clustering, query performance tuning, cost governance, and storage efficiency.
    - Own and evolve our agentic data modeling and natural-language data retrieval (text-to-SQL) capabilities: build and curate semantic models, refine prompts, expand verified question libraries, and measure answer quality so that natural-language analytics become more accurate over time.
    - Design and build batch and streaming data pipelines that ingest, transform, and model data from Filevine's product, CRM, billing, and telemetry systems into trusted, well-documented data products.
    - Build the data foundations that power agentic AI workflows and LOIS — including feature pipelines, retrieval datasets, and low-latency serving paths for LLM-based reasoning over customer data.
    - Establish reliability and governance standards including data quality checks, lineage, monitoring, incident response, access control, and PII handling consistent with our compliance posture.
    - Partner with product and engineering stakeholders to define event contracts, model business concepts (matters, firms, users, billing) consistently, and reduce ambiguity across downstream consumers.
    - Lead the evaluation and adoption of emerging tools across the modern data stack, recommending right-fit solutions that align with Filevine's strategic and security goals.
    - Provide technical mentorship within the Data Engineering team, contribute to code reviews, design documents, and architecture decision records (ADRs), and help raise the bar on data engineering practice at Filevine.
    - Participate in on-call rotations to maintain SLAs for production data pipelines and analytics surfaces.

What You'll Need

    Required

    - 5+ years of professional data engineering or backend engineering experience, with a proven track record of delivering production-grade data systems that drive measurable business outcomes.

    - Significant hands-on experience operating a modern cloud data warehouse in production (e.g., Snowflake, BigQuery, Redshift, Databricks, Synapse, or equivalent) — including performance tuning, warehouse and cost management, role-based access control, and orchestration of warehouse-native compute (stored procedures, UDFs, streams/tasks, or equivalent).

    - Demonstrated experience building with Agentic AI or LLM-powered systems in production — e.g., RAG pipelines, tool-using agents, MCP servers, warehouse-native LLM functions (such as Snowflake Cortex, BigQuery ML, or Databricks AI), or comparable frameworks.

    - Expertise in advanced SQL and Python for building reliable, well-tested data pipelines and transformations.

    - Experience with modern data modeling and transformation tooling such as dbt, including testing, documentation, and backward-compatible model design that supports self-service analytics.

    - Experience with workflow orchestration (Airflow, Dagster, or similar) and cloud-native deployment on AWS, Azure, or GCP.

    - Strong fundamentals in data modeling (dimensional, star/snowflake schemas), distributed systems, performance tuning, and data quality and observability principles.

    - Professional experience with modern software development methodologies: Agile/Kanban, Git, CI/CD, and DevOps.

    - Excellent written and verbal communication skills, with the ability to explain complex technical and data concepts to both technical and non-technical stakeholders.

    - B.S., M.S., or Ph.D. in Computer Science, Information Systems, Engineering, or a related field — or equivalent professional experience.


    Nice to Have

    - Hands-on Snowflake experience, including Snowpipe, streams/tasks, data sharing, and cost/governance tuning at scale.

    - Experience with Snowflake Cortex Analyst specifically, including authoring and iterating on semantic models and verified queries.

    - .NET / C# experience, or familiarity with reading and integrating against a .NET-based application backend.

    - Experience using modern UI development tools, particularly Svelte or React.

    - Experience supporting machine learning workflows: feature stores, training datasets, or real-time scoring infrastructure.

    - Experience in SaaS or product-led growth environments, including product analytics and revenue/usage telemetry.

    - Infrastructure-as-code experience (Terraform), containerization (Docker, Kubernetes), and deployment tooling (Octopus Deploy).

    - Familiarity with the legal tech domain, document-heavy data, or working with unstructured data at scale.

    - Track record of mentoring engineers and contributing to hiring and team-building.


What You Can Expect

    - You will be a core builder of the data and AI foundations that LOIS and Filevine's product surfaces are built on.

    - Your work will directly shape how legal professionals query, reason over, and act on their data — and will determine how fast, accurate, and trustworthy our agentic AI experiences become.

Cool Company Benefits:
- A dynamic, rapidly growing company, focused on helping organizations thrive
- Medical, Dental, & Vision Insurance (for full-time employees)
- Competitive & Fair Pay
- Maternity & paternity leave (for full-time employees)
- Short & long-term disability
- Opportunity to learn from a dedicated leadership team
- Top-of-the-line company swag
 
Privacy Policy Notice
Filevine will handle your personal information according to what’s outlined in our Privacy Policy.
 
Communication about this opportunity, or any open role at Filevine, will only come from representatives with email addresses ending in "filevine.com". Messages from other addresses are not affiliated with Filevine and should not be responded to.