Q&A: Design principles for multi-environment AI architectures


Datacom’s AI and infrastructure experts – Matt Neil (Director – Data Centres), Mike Walls (Director – Cloud) and Daniel Bowbyes (Associate Director – Strategy) – discuss when centralised compute makes sense for AI, and how to orchestrate AI across edge, core data centres and cloud. The team shares governance, readiness and architectural approaches to enable reliable multi-environment AI.

When does centralised cloud or core data centre compute make the most sense for AI workloads?

Mike Walls, Director – Cloud: Centralised compute makes the most sense when workloads benefit from scale, governance and uniform platform capabilities that are harder to achieve in distributed setups. Think large‑scale training, shared AI platforms, or workloads that require a consistent, controlled environment with robust security and regulatory compliance. These can be more cost‑effective and easier to manage in a core data centre or central cloud. Private cloud is also an option when organisations need tighter control, governance or data‑handling assurances, or when workloads don’t require a low‑latency edge path.

Are you seeing customers combine multiple environments for a single AI solution, and if so, how does that typically work?

Walls: Yes, AI is increasingly distributed, and edge, core data centres and cloud each have a role. A typical pattern we’re seeing and advising on is organisations placing latency‑sensitive, real‑time tasks at the edge (or near‑edge), while heavier training, model development and data‑intensive processing sits in core data centres or the cloud. Public cloud allows quick experimentation and scale, whereas private or sovereign cloud may be more effective for running persistent production large language models (LLMs) or meeting compliance needs. This multi-environment approach requires clear orchestration, data pipelines and governance to ensure consistency, security and compatibility across environments. Datacom is uniquely placed to provide those capabilities (infrastructure, governance or applications) and to offer platforms, tooling or bespoke services that support multi‑environment deployments.
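To make the placement pattern concrete, here is a minimal illustrative sketch in Python. The fields, thresholds and environment names are assumptions made for illustration, not a Datacom API or framework.

from dataclasses import dataclass

# Illustrative placement policy for the pattern described above:
# latency-sensitive inference goes to the edge, sovereign data stays in
# private/sovereign cloud, heavy training bursts to public cloud.
# All names and thresholds here are assumptions, not a real interface.

@dataclass
class Workload:
    name: str
    max_latency_ms: int      # tightest response time the use case tolerates
    sovereign_data: bool     # must data stay under in-country control?
    is_training: bool        # heavy model training vs. runtime inference

def place(w: Workload) -> str:
    """Return a candidate environment for a workload."""
    if w.max_latency_ms <= 50 and not w.is_training:
        return "edge / near-edge"          # real-time tasks close to the data
    if w.sovereign_data:
        return "private/sovereign cloud"   # governance and residency first
    if w.is_training:
        return "public cloud"              # burst scale for experimentation
    return "core data centre"              # steady, controlled production

if __name__ == "__main__":
    for w in [Workload("vision-inference", 20, False, False),
              Workload("llm-fine-tune", 10_000, True, True)]:
        print(w.name, "->", place(w))

In a real deployment the decision would also weigh cost, data gravity and existing contracts, but the shape of the policy is the same.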

Matt Neil, Director – Data Centres: Customers are already mixing multiple AI components and tools. For example, one tool might generate code (Claude) while another reads documents (OpenAI), and the two are brought together for different functions. They’re using different tools and agents that then need to be integrated into an overall workflow. It’s a maturity journey: we’re seeing organisations move from piecing together separate software to building a cohesive ecosystem.
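As a rough sketch of that tool-mixing pattern, the Python below chains two tools behind one common interface. The Tool protocol and the stand-in classes are hypothetical; in practice each run method would call the relevant vendor’s API.

from typing import Protocol

class Tool(Protocol):
    """Common interface so heterogeneous AI tools compose into one workflow."""
    def run(self, prompt: str) -> str: ...

class DocumentReader:
    """Stand-in for a document-understanding model (e.g. an OpenAI model)."""
    def run(self, prompt: str) -> str:
        return f"summary of {prompt}"

class CodeGenerator:
    """Stand-in for a code-generation model (e.g. Claude)."""
    def run(self, prompt: str) -> str:
        return f"# generated code based on: {prompt}"

def workflow(spec_doc: str, reader: Tool, coder: Tool) -> str:
    """Chain the tools: read/summarise a spec, then generate code from it."""
    return coder.run(reader.run(spec_doc))

print(workflow("requirements.pdf", DocumentReader(), CodeGenerator()))

The value of the common interface is exactly the integration step Matt describes: tools can be swapped as the market changes without rewriting the workflow.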

How does Datacom’s regional data centre footprint across Australia and New Zealand support distributed AI strategies?

Neil: In New Zealand, our data centres have a genuine regional footprint, serving four of the largest cities and enabling New Zealand–wide coverage (North Island and South Island). That allows workloads to run closer to where they need to be, including localised AI deployments in places like Christchurch. In Australia and New Zealand, we can support distributed AI and help customers operate across borders if that’s what they need. Datacom’s data centre ownership means we can offer end-to-end hosting and infrastructure closer to our customers, which is a strong enabler for distributed AI.

From a cloud and hybrid perspective, how does Datacom help customers design AI architectures that span cloud, data centre and edge?

Walls: We provide a cohesive, multi‑environment strategy that runs from use case through to platform, integrating private cloud with public cloud and edge capabilities, along with the governance and tooling to support AI workloads. That involves advising on service models (GPU‑based infrastructure, bespoke builds, or platform/tooling) and helping customers design architectures that span multiple environments while addressing data governance, security and operational consistency.

Daniel Bowbyes, Associate Director – Strategy: To build out strategies for our customers, we draw on our breadth of AI professional and software development services, AI security services, AI sovereign platforms, public cloud partnerships and hosting facilities. With Datacom working alongside them, customers can safely and swiftly adopt and embed AI across their IT landscape.

If you had to summarise Datacom’s approach to AI infrastructure in one idea, what would it be?

Walls: Datacom’s AI infrastructure strategy is simple: match each use case to the right model, tools and platform – edge, core data centre or cloud – based on the task, business requirements, maturity and governance needs, with clear ownership and scalable tooling to orchestrate across environments.

Neil: Own and operate the core infrastructure. Our data centre capability is a unique differentiator that lets us deliver the full stack – from infrastructure to governance – so we can act as a trusted advisor and provide a complete, end-to-end AI solution.

With AI evolving so rapidly, how much does uncertainty about the future influence infrastructure decisions being made today?

Neil: A lot. Organisations often don’t know what they want to do with AI or what use cases to pursue, which makes it easy to waste money on the hype. The right approach is to understand potential use cases, adopt a framework and consider a “try before you buy” approach, including sandbox environments, pilot infrastructure and vendor partnerships to help customers experiment safely. This reduces risk and helps shape a practical, scalable path forward rather than rushing into big, expensive bets.

Bowbyes: AI is being adopted faster than any previous technology, and the sheer pace of ongoing development and investment will likely mean the current leaders in both AI software and hardware change over time. At the same time, the opportunities AI presents to positively disrupt business are huge and can’t be ignored.

Every organisation faces a unique set of challenges and opportunities, so how they lean into AI and the risks they are prepared to take will be very different. For organisations that have heavy data processing and research requirements, the risk of infrastructure obsolescence will likely be less than the cost of consuming an ‘as a service’ offering (which has infrastructure obsolescence baked into the price). For many other organisations, consuming ‘as a service’ offerings will be less risky in the short to medium term than investing in infrastructure.

Walls: The uncertainty argues for a flexible, staged and modular approach rather than long‑lead commitments to a single path. To combat some of the concerns organisations may have, we recommend a funnel‑based readiness framework to help organisations identify their AI use cases and goals (training, inference, coding tasks) and then choose appropriate architectures and services. Because AI is changing quickly, decisions today should prioritise adaptability, pilot testing and options that can be extended or re‑configured as requirements sharpen, rather than locking into a single, rigid model.
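As a rough sketch of the funnel idea, the snippet below narrows from stated AI goals to a candidate footprint and service model. The categories and mappings are illustrative assumptions, not Datacom’s actual framework.

# Hypothetical funnel: map identified AI goals to candidate footprints and
# service models. Categories and mappings are illustrative only.
FUNNEL = {
    "training":  {"footprint": "core DC or public cloud", "service": "GPU-based infrastructure"},
    "inference": {"footprint": "edge or private cloud",   "service": "platform/tooling"},
    "coding":    {"footprint": "public cloud (SaaS)",     "service": "as-a-service tools"},
}

def assess(goals: list[str]) -> list[tuple[str, dict]]:
    """Keep only recognised goals and attach a candidate recommendation."""
    return [(g, FUNNEL[g]) for g in goals if g in FUNNEL]

for goal, rec in assess(["inference", "training"]):
    print(f"{goal}: {rec['footprint']} via {rec['service']}")

The point is the staged narrowing: identify the goal first, then let the goal drive the architecture, rather than committing to infrastructure up front.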

Learn how Datacom is partnering with organisations to move from AI strategy to scalable practice – designing, piloting, and scaling secure AI across diverse environments.

Glossary

  • Edge/near-edge: Compute resources located close to data sources or end users to reduce latency. Near-edge refers to a layer slightly further out, often in metro-area facilities.
  • Multi-environment AI architecture: An approach that intentionally uses edge, core data centres and cloud to balance latency, governance, cost and data residency.
  • Orchestration: End-to-end management of AI models and data across environments, including deployment, data movement and lifecycle operations.
  • Governance: Policies and controls governing data handling, model usage, security, compliance, risk management and auditability across environments.
  • FinOps: Financial operations practices for AI and cloud spend, including cost visibility, budgeting, optimisation and cost control across environments.
  • Data residency: Requirements on where data is physically stored and processed, often tied to geographic or regulatory obligations.
  • Data sovereignty: Legal authority over data, including access rights and regulatory obligations that can constrain data movement across borders.
  • Data locality: Proximity of data to where it is processed, influencing latency, bandwidth and regulatory considerations.
  • Latency: Time delay between input and output, typically measured in milliseconds. It’s critical for real-time AI tasks.
  • AI workloads: Categories such as training (model learning), inference (runtime predictions) and generative/agentic AI (code generation, chat, autonomous decision-making).
  • Service models for AI deployments: Examples include GPU-based infrastructure, bespoke builds or platform/tooling offerings that support multi-environment AI.
  • Funnel-based readiness framework: A structured approach to identify AI goals (training, inference, coding tasks) and map them to suitable footprints, services and governance controls.


