Services
Four disciplines, engaged separately or in combination.
Each engagement begins from the constraints — the data, its required location, the users, the governance under which the resulting system will operate — and those constraints, rather than a preferred architecture or vendor, shape the work that follows. The four disciplines below describe the shapes that engagement most commonly takes.
AI product engineering
Full-stack delivery of language-model platforms, agent frameworks, and bespoke artificial-intelligence interfaces — deployed inside the client's tenant, with a data architecture designed to keep it there.
- Language-model applications across OpenAI, Anthropic, Azure OpenAI, and self-hosted models, including extensions of open-source platforms such as LibreChat.
- Agent and tool-use architectures, including Model Context Protocol (MCP) servers for enterprise integration with Microsoft 365 and SharePoint.
- Custom Copilot-style extensions, Microsoft Teams applications, and internal Copilot workflows.
- Evaluation harnesses, prompt-regression testing, and model-routing strategy for production multi-model deployments.
- Targeted fine-tuning and lightweight post-training, applied where the evaluation demonstrates that it is justified.
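As an illustration of the prompt-regression testing named above, the following is a minimal sketch: each case pairs a prompt with substrings the reply must contain. `call_model`, the case data, and the structure of the results are all invented for this example, not a production harness or real provider client.

```python
def call_model(prompt: str) -> str:
    # Stub standing in for a real model call; returns a canned answer.
    return "Paris is the capital of France."

CASES = [
    {"prompt": "What is the capital of France?", "must_include": ["Paris"]},
]

def run_regression(model, cases):
    """Return the cases whose reply is missing a required substring."""
    failures = []
    for case in cases:
        reply = model(case["prompt"])
        missing = [s for s in case["must_include"] if s not in reply]
        if missing:
            failures.append({"prompt": case["prompt"], "missing": missing})
    return failures

failures = run_regression(call_model, CASES)  # empty list when all cases pass
```

Run against every model and prompt revision, a check like this catches silent regressions before they reach users.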
Reference engagement
Chat:R — an extended LibreChat deployment serving approximately one thousand staff at Rud Pedersen Group, with a purpose-built SharePoint Model Context Protocol gateway and three bespoke enterprise features.
Read the case study →
Applied machine learning
Computer vision, time-series forecasting, and interpretable predictive modelling for clinical, operational, and financial settings. Methods are chosen by problem rather than by trend; baselines are built where practical; honest evaluation precedes delivery.
- Computer vision and segmentation on real imagery — including deep-learning and classical baselines for comparison.
- Time-series forecasting and continuous-learning systems, typically with gradient-boosted methods and careful validation.
- Interpretable predictive modelling, with SHAP and related methods, for settings in which the explanation is part of the deliverable.
- Retrieval-augmented generation pipelines over document and briefing corpora: ingestion, chunking, embedding, hybrid retrieval, reranking, and evaluation.
- Public-dataset joins — HM Land Registry, Energy Performance Certificate, Office for National Statistics — and geospatial analytics where the analysis calls for it.
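The retrieval-augmented generation stages listed above can be sketched end to end in miniature. This toy version substitutes a bag-of-words count for a learned embedding, and the documents and query are invented for illustration; a real pipeline would add overlap-aware chunking, hybrid retrieval, and reranking.

```python
import math
from collections import Counter

def chunk(text, size=8):
    """Split text into fixed-size word windows (overlap omitted here)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Toy bag-of-words 'embedding'; a real pipeline uses a model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Rank chunks by similarity to the query; keep the top k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

docs = [
    "Energy policy briefing: grid upgrades and tariffs.",
    "Tax note: corporate rates and reliefs.",
]
chunks = [c for d in docs for c in chunk(d)]
top = retrieve("energy tariffs", chunks, k=1)  # best-matching chunk first
```

The retrieved chunks would then be passed to the model as context, with evaluation measuring both retrieval quality and the faithfulness of the generated answer.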
Prior work
A U-Net cell-spheroid segmentation model for a UCL Royal Free Hospital research group, with a validation intersection-over-union of 0.968 and Dice coefficient of 0.983; residual-value forecasting across an electric-vehicle fleet (gradient-boosted trees with continuous learning); an interpretable house-price model with SHAP-based explanation over public-register data.
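For reference, the two metrics quoted above are standard overlap measures on binary masks. The sketch below represents masks as sets of pixel coordinates; the example masks are illustrative, not drawn from the project.

```python
def iou(pred, truth):
    """Intersection-over-union of two binary masks given as pixel sets."""
    union = len(pred | truth)
    return len(pred & truth) / union if union else 1.0

def dice(pred, truth):
    """Dice coefficient: 2 * |A & B| / (|A| + |B|)."""
    denom = len(pred) + len(truth)
    return 2 * len(pred & truth) / denom if denom else 1.0

pred = {(0, 0), (0, 1), (1, 0)}
truth = {(0, 0), (0, 1), (1, 1)}
# Two of four distinct pixels overlap: iou = 0.5, dice = 2/3.
```

Dice weights the intersection more heavily than IoU, which is why the reported Dice score (0.983) sits above the IoU (0.968) for the same model.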
Background on the founder →
Platform and deployment
Tenant-bound cloud deployment of AI systems — the infrastructure, identity, and operational choices that determine whether a working prototype can be adopted in production. The studio leads the architecture itself, and pairs with specialist cloud engineers when an engagement calls for deep, hands-on platform work.
- Azure-first service selection: App Service, Virtual Machines, Container Instances, Storage, Key Vault, AI Search.
- Infrastructure-as-code with Bicep; GitHub Actions with federated OpenID Connect authentication; multi-environment continuous integration and deployment.
- Identity and authorisation design: Microsoft Entra single sign-on, domain-scoped access, managed identities, private endpoints.
- Observability through Application Insights and Log Analytics; incident response and runbook authorship.
- Multi-cloud capable (Amazon Web Services, Google Cloud Platform) where the engagement's constraints call for it; architecture is led by the problem rather than by vendor preference.
Reference engagement
The Chat:R deployment operates across three Azure environments, with Bicep-authored infrastructure, federated GitHub Actions pipelines, and Microsoft Entra single sign-on; a considered pivot from managed Kubernetes to virtual machines roughly halved the monthly operating cost.
See the infrastructure notes →
AI strategy and governance
Advisory, programme design, and readiness assessment for organisations moving beyond proof-of-concept artificial intelligence. Grounded in peer-reviewed and government-published work, including the founder's own contributions.
- AI risk, governance, and cyber-security advisory — drawing directly on the UK Government DSIT whitepaper on cyber-security risks to artificial intelligence (Barua et al., 2024).
- Sector-specific deployment guidance — drawing on the Jersey Finance published guide to artificial intelligence in Jersey's finance industry (McCay and Barua, 2024).
- Generative-AI programme design at enterprise scale, including portfolio planning, use-case triage, and cross-divisional delivery.
- Build-versus-buy assessments across enterprise platforms — ChatGPT Enterprise, Microsoft Copilot, self-hosted, and bespoke — presented with honest cost and capability modelling.
- Internal enablement: workshops, written training material, and champions-network design for organisations adopting AI at scale.
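The build-versus-buy cost comparison mentioned above can be framed as a simple model: per-seat licensing scales with headcount, while a self-hosted platform scales with usage plus a fixed infrastructure floor. Every figure below is an illustrative assumption, not a vendor price.

```python
def annual_seat_cost(seats, per_seat_per_month):
    """Licensed platform: cost scales with headcount."""
    return seats * per_seat_per_month * 12

def annual_usage_cost(requests_per_month, cost_per_request, fixed_monthly_infra):
    """Self-hosted platform: cost scales with usage plus a fixed floor."""
    return (requests_per_month * cost_per_request + fixed_monthly_infra) * 12

licence = annual_seat_cost(1000, 30.0)                    # 1,000 seats at £30/seat/month
self_hosted = annual_usage_cost(200_000, 0.01, 2_000.0)   # assumed request volume and rates
```

The break-even point depends almost entirely on utilisation: light or uneven usage favours usage-based pricing, while heavy, uniform usage favours per-seat licences — which is why the assessment starts from measured demand rather than list prices.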
Prior work
A generative-AI programme at Grant Thornton with approximately ten million pounds in committed value, comprising more than twenty-five production use cases, more than eight million pounds in projected efficiency savings, and demonstrations to more than two thousand staff. Named co-authorship on two 2024 publications: the UK Government DSIT whitepaper on cyber-security risks to artificial intelligence, and the Jersey Finance guide to artificial intelligence in Jersey's finance industry.
Background on the founder →