Location
London, England, UK
Salary
Not specified
Type
Full-time
Posted
Today
Job Description
**Help us build the future of real-time retail execution**
At Neurolabs, we're helping the world's leading CPG brands unlock real-time visibility across the supply chain. Our Image Recognition technology is powered by synthetic data and digital twins, giving customers scalable, accurate insights — fast. No manual audits. No delays. No retraining. Just immediate visibility into what's actually happening in-store.
We're looking for a **Mobile Developer (iOS)** who owns the full mobile layer - from the SDKs our partners integrate to the app our clients use in the field. You'll deploy vision models on-device, ship and support SDKs used by external engineering teams, and keep a live product stable and improving. This is a hands-on role with real ownership - not a cog in a large mobile team, but the person who defines how we build and ship on mobile.
✨Why this role matters
Our mobile layer is the point where our technology meets the real world. The Mobile Developer owns that layer end to end - from the SDKs our partners integrate to the app our clients use in the field. What you build here directly determines how fast we can move with partners and how reliably our technology performs at scale.
Your mission: own the mobile layer across SDK integrations, on-device ML, and ZIA Capture - and make it the most reliable part of what we ship.
🧠 What You'll Do
Lead SDK integrations with partners and clients
- You own the integration between 375's platform and Neurolabs' technology, ensuring both products work seamlessly together
- You drive mobile app integrations with key clients - Smollan, CPM, and Randstad - working alongside Customer Success to deliver these successfully
- You manage versioning, documentation, and direct support for partner and client engineers integrating our SDKs into their own products
Own our iOS camera SDK
- You maintain and extend our camera SDK library, covering real-time image quality checks including blur, focus, lighting, exposure, framing overlays, and domain-specific capture guidance
- You manage distribution via SPM and keep the SDK stable, documented, and integration-ready
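To make the "real-time image quality checks" concrete: blur detection is commonly done with a variance-of-the-Laplacian heuristic. The sketch below is a toy, stdlib-only illustration of that idea - it is not Neurolabs' SDK code, and a production camera SDK would run this natively on-device (e.g. via Accelerate or Metal) on live frames. The `threshold` value is an arbitrary assumption for illustration.

```python
# Toy blur check via variance of the Laplacian: sharp images have strong
# edges, so the Laplacian response varies widely; blurry or flat images
# yield a low variance. Pure-Python sketch, not production SDK code.

def laplacian_variance(image):
    """image: 2-D list of grayscale values (0-255)."""
    h, w = len(image), len(image[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour Laplacian kernel: up + down + left + right - 4*centre
            lap = (image[y - 1][x] + image[y + 1][x]
                   + image[y][x - 1] + image[y][x + 1]
                   - 4 * image[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def is_sharp(image, threshold=100.0):
    # threshold is an illustrative constant; real SDKs calibrate per device/domain
    return laplacian_variance(image) >= threshold

# A checkerboard (hard edges) scores high; a flat frame scores zero.
checker = [[255 if (x + y) % 2 == 0 else 0 for x in range(8)] for y in range(8)]
flat = [[128] * 8 for _ in range(8)]
```

The same pattern extends to the other checks the bullet lists: mean luminance for exposure, histogram spread for lighting, and geometric tests against framing overlays.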
Own the on-device ML pipeline
- You deploy models end to end - PyTorch to Core ML conversion, quantisation, and performance profiling across device tiers
- You run vision models including YOLO, Grounding DINO, and similar VLMs natively on device for real-time image processing
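The actual conversion pipeline runs through tooling such as coremltools, but the core idea behind the quantisation step - mapping float weights onto 8-bit integers with a scale and zero point, then dequantising at inference time - can be sketched in a few lines. This is a toy, stdlib-only illustration of linear (affine) quantisation, not the real pipeline; production tooling does this per-tensor or per-channel with calibration data.

```python
# Toy illustration of linear 8-bit quantisation, the idea behind shrinking
# models for on-device deployment: encode float weights as uint8 plus a
# (scale, zero_point) pair, and decode approximately at inference time.

def quantize(weights, num_bits=8):
    qmax = (1 << num_bits) - 1                      # 255 for 8-bit
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / qmax or 1.0                 # guard against a flat tensor
    zero_point = round(-lo / scale)
    q = [min(qmax, max(0, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
# Each restored weight lands within one quantisation step of the original,
# which is the accuracy/size trade-off profiled across device tiers.
```

Performance profiling then measures what this trade-off costs on each device tier: a quarter of the weight memory in exchange for bounded reconstruction error per weight.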
Maintain and extend ZIA Capture
- You keep ZIA Capture stable - resolving bugs and shipping feature development that connects the mobile layer to our backend technology
- You own authentication, REST/GraphQL API integration, and custom camera capture flows within the app
Improve our CI/CD pipeline
- You improve and extend the existing CI/CD pipeline for builds and releases, and maintain Cordova plugin integrations for current clients
Requirements
Experience and technical skills
You'll be working across a live iOS app, a camera SDK, and on-device vision models in active partner and client integrations. Here's what that requires.
- Startup background - has worked in a pre-seed, seed, or Series A environment, with evidence of wearing multiple hats, moving fast, and delivering under resource constraints
- Standalone mobile developer - has operated as the sole or lead mobile developer on a product, with end-to-end ownership from scoping through to shipping and maintaining in production
- On-device ML in production - has shipped a mobile feature powered by an on-device ML model, not just a prototype. Evidence of model conversion pipelines, quantisation, performance profiling, and edge case handling. Bonus for vision models such as YOLO, Grounding DINO, or similar VLMs
- SDK built from scratch - has designed and shipped an SDK as a product, with evidence of versioning, documentation, and direct support of integrating engineers
- Live app ownership - has maintained a live application over time, fixing bugs, shipping incremental features, and keeping a product stable
- External collaboration - has worked directly with external engineers or partner teams on a shared technical goal
- CI/CD and release ownership - has set up or meaningfully improved build and release pipelines
- Evidence of curiosity - personal projects, experimental work, or active use of AI tooling. Someone who builds outside of their day job and stays ahead of what is coming
Technical Skills
- Swift / iOS (native) - deep, primary platform focus
- On-device ML deployment - experience deploying vision models on-device using Core ML, ONNX Runtime, or TensorFlow Lite, with a solid understanding of model optimisation for mobile constraints
- Model conversion pipelines - PyTorch to Core ML or TFLite, with performance profiling across device tiers
- SDK development and distribution
- SPM packaging
- GraphQL integration
- CI/CD pipelines
- Active use of AI tooling
How You Work
- Accountable - you take full ownership of the mobile layer end to end. When something breaks, you're the first to investigate and own the fix
- Proactive communicator - you share progress and blockers upfront without being asked, and keep the team informed as a default
- Collaborative - you work closely with the Product Owner and wider team, and bring a team player mentality that makes you easy to work with
- Adaptable - you adjust quickly when priorities shift and are comfortable in a fast-moving environment
- Focused - you understand what matters most at any given time, direct your energy accordingly, and are comfortable saying no to work outside current priorities
- Self-motivated - you take ownership without close oversight and proactively drive your own output
- Pushes back constructively - you understand technical limitations and have the courage to challenge direction at the right moment
Benefits
- Equity share options - we operate an EMI share option scheme, so when the company grows in value, you share in it. Options vest over 48 months with a 12-month cliff
- Pension - we contribute 3% through Penfold via salary sacrifice, with 5% employee contributions. Auto-enrolment applies after a 3-month postponement period, though you can join sooner if you'd like
- Hybrid-first - minimum 2 days a week in our London office near Liverpool Street Station with flexibility built around delivery and trust
- Perks at Work - access to thousands of discounts across 30,000+ brands via CharlieHR, including up to 44% off cinema tickets and 15% off high street retailers
- Cycle to Work scheme - purchase a bike and accessories in a tax-efficient way through salary sacrifice via Cyclescheme
- Holiday Purchase scheme - buy up to 5 additional days of annual leave per year, spread across monthly salary deductions
- 25 days annual leave plus bank holidays, with the option to carry forward up to 3 days
- Friday office lunch - every Friday in the London office, lunch is on us
- Company socials - throughout the year we take time to reflect, reconnect, and recharge together
- AXA private medical insurance - Neurolabs covers the full premium. Cover includes cancer treatment, diagnostics, physiotherapy, counselling, and European travel insurance. Dental and optical handbooks also available
Please note:
We are not accepting unsolicited CVs or candidate introductions from recruitment agencies. Any profiles submitted without prior agreement will be considered the property of Neurolabs, and no fees will be payable.