
Power consumption and performance are critical strategic challenges for enterprises around the world. As AI workloads scale and digital transformation accelerates, traditional networking and computing infrastructure is struggling to keep up. A new approach is required, one that combines lower power requirements with higher speed and near-zero latency, to realize the AI Infrastructure of the future.
 
One way to deliver on these needs is through a combination of optical technologies and a new approach to computing architecture, infrastructure, and data exchange: a concept known as DCI (Data-Centric Infrastructure). Based on IOWN (Innovative Optical and Wireless Network), DCI was proposed by NTT, Inc. (NTT) and is being developed through collaborative efforts within the IOWN Global Forum. Organizations that embrace DCI early on will realize advantages in sustainability, scalability, cost control, and innovation readiness for a new AI Infrastructure.

What is Data-Centric Infrastructure (DCI)?

DCI is a next-generation computing architecture designed to orchestrate compute, network, and storage resources based on optical data transmission without distance or hardware boundaries.

It’s a distributed approach to data centers and computing resources, with monitoring and management that coordinate compute, storage, and networking resources across long distances over an intelligent system of optical technologies. It aims to deliver high-performance infrastructure while minimizing energy use by dynamically allocating network and computing resources to the optimal time and place for data processing and communication.

NTT has moved DCI toward implementation, including data center-to-data center connections and direct server deployments, and has shown measurable reductions in power consumption while achieving the same effectiveness as if the compute were in a single data center. Central to this reduction is Composable Disaggregated Infrastructure, which enables flexible resource sharing and orchestration across multiple server cores. By layering application control software over this architecture, NTT has demonstrated the effectiveness of dynamic GPU pooling and compute resource allocation.
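The pooling idea above can be sketched in a few lines. This is an illustrative assumption only: the `GpuPool` class, the site names, and the greedy allocation policy are invented for clarity and are not NTT's actual control software; the point is simply that GPUs spread across sites can be granted and released as one logical pool.

```python
# Hypothetical sketch of dynamic GPU pooling across data centers, in the
# spirit of Composable Disaggregated Infrastructure. All names and the
# allocation policy are assumptions for illustration.
from dataclasses import dataclass, field


@dataclass
class GpuPool:
    # Free GPUs per site, e.g. {"tokyo-dc": 8, "osaka-dc": 16}
    free: dict = field(default_factory=dict)

    def allocate(self, count: int) -> dict:
        """Grant `count` GPUs, drawing from the sites with most capacity."""
        grant, needed = {}, count
        for site, avail in sorted(self.free.items(), key=lambda kv: -kv[1]):
            take = min(avail, needed)
            if take:
                grant[site] = take
                self.free[site] -= take
                needed -= take
            if needed == 0:
                break
        if needed:  # roll back if the pool cannot satisfy the request
            for site, take in grant.items():
                self.free[site] += take
            raise RuntimeError("insufficient GPUs in pool")
        return grant

    def release(self, grant: dict) -> None:
        """Return a previous grant to the pool."""
        for site, take in grant.items():
            self.free[site] += take


pool = GpuPool({"tokyo-dc": 4, "osaka-dc": 8})
job = pool.allocate(10)   # a single request spans both sites transparently
pool.release(job)
```

An application control layer on top of such a pool is what decides when a workload should run on local versus remote GPUs; over an optical network the distinction becomes largely invisible to the workload itself.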

What sets DCI apart

Built on the IOWN Open APN, leveraging optical technologies rather than traditional internet networks, DCI emphasizes open industry standards developed by the IOWN Global Forum, encouraging multi-vendor participation and innovation. It acts as a single, unified infrastructure, delivering real-time performance at scale while dramatically improving energy efficiency.

While DCI is similar to other concepts, such as composable computing, it is unique in being developed on a foundation set by the IOWN Global Forum (160 industry-leading companies). It stands out from other architectures because of:

Multi data center integration over APN

DCI makes it possible to treat multiple data centers, connected by the Open APN (All-Photonics Network), as one large, unified data center. Data centers can then be distributed and placed closer to renewable or lower-cost power sources, or farther from urban areas, for a more sustainable AI infrastructure.

Open collaboration

Developing DCI with the IOWN Global Forum encourages a broader ecosystem and increases innovation and choice. By working with industry technology leaders, the long-term adoption of DCI is more likely than a proprietary version of a similar approach.

Energy efficiency through Photonics-Electronics Convergence

The NTT Photonics-Electronics Convergence (PEC) device enables optical data transmission inside the server: between CPUs, GPUs, and boards, and within chip packages. This is a critical step toward realizing energy-efficient computing infrastructure. As PEC technology continues to evolve, future generations are expected to enable even higher integration, faster interconnects, and broader applicability across AI-centric data workloads.

Showcasing DCI at Expo 2025

To make these innovations tangible, NTT showcased three practical, real-world use cases at Expo 2025, where IOWN formed the backbone of each.

  • The NTT Pavilion: Emotion-responsive architecture

    At Expo 2025, NTT demonstrated how DCI could create immersive, data-driven experiences. The NTT Pavilion featured a curtain of cloth that would ripple and move – not with the wind, but in response to human emotion.

    – Cameras inside the pavilion captured visitor facial expressions.

    – Real-time AI analysis determined the level of excitement based on the number of smiles.

    – That data was transmitted to a remote data center in Osaka City using RDMA over APN (Remote Direct Memory Access over the All-Photonics Network), where servers processed it and triggered actuators to move the curtain in response.

    This entire pipeline, from AI video analysis to data transfer to actuation, used one-eighth the power it would have required when work on the DCI project began in 2020. This improvement in energy efficiency comes from a combination of three technologies: 1) a photonics-electronics convergence switch, 2) enhanced data processing with dedicated accelerators, and 3) resource optimization tailored to the scene.

    The system allows seamless use of remote pooled servers and GPUs as if they were on-site, enabling real-time inferencing with minimal latency and power use. It’s a compelling example of how AI, edge computing, and optical infrastructure converge in the real world.

  • The NTT Pavilion: Safe operation

    In a similar example, NTT used the same technology and concept as outlined above – involving cameras, real-time AI analysis, and RDMA over APN data transmission – to monitor congestion around the NTT Pavilion and maintain personal safety for Expo 2025 attendees.

    Specifically, the goal was to detect when anyone fell. Fall detection and congestion detection require the fastest possible response so the security team can act quickly to help. The event site, however, imposed technical constraints that required compromises – for example, limited server installation space and massive sensor data that had to be processed in real time. Such constraints normally force lower image quality (which reduces accuracy) or slower response times.

    NTT showed how this could be overcome by using RDMA over APN for low-latency connections and provisioning sufficient computing resources in a large remote data center, thereby enabling real-time safety measures and fall detection.

    This example demonstrates how massive AI data analysis can be deployed effectively at events and similar venues.

  • Technology demonstration: Location-free computing

    Another demonstration at Expo 2025 showed that high-performance computing can be achieved even when the data is located far from the computing resources. NTT calls this “location-free computing.” High-performance computing here includes AI inference – the ability of an AI model to make predictions or draw conclusions – over huge amounts of data.

    In the VIP Lounge, NTT used a 10 km optical cable to connect a server installed inside a compute rack to another server with a small Non-Volatile Memory Express (NVMe) drive. AI inference processing time was essentially the same whether performed locally or remotely using this technology – underscoring how suburban data centers can serve dense urban areas without consuming real estate or power in the city itself.

    Here are the technical details of how the demonstration worked:
    – Two types of AI inference processing were compared in terms of the completion time for inferencing one hundred 8K images.

    – One is local AI inference processing, in which the images are located on the same server as the inference engine.

    – The other is remote AI inference processing. In this demonstration, the images were stored 10 km away from the AI inference server and had to be transferred before inferencing.

    – With the data transfer accelerated by NVMe over RDMA, the data transfer rate nearly matched link speed, delivering near-instantaneous access across long distances.

    – The demonstration showed little difference in completion time between local and remote AI inference processing.
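A back-of-envelope model shows why a near-line-rate transfer adds so little to the total. All numbers below (image size, link rate, per-image inference time) are illustrative assumptions, not figures from the demonstration; only the structure – remote completion time equals bulk transfer plus inference – reflects the comparison described above.

```python
# Toy model of the local vs. remote comparison. IMAGE_BYTES, LINK_BPS, and
# INFER_S are assumed values for illustration only.

def completion_time(images: int, infer_s: float, transfer_s: float = 0.0) -> float:
    """Total seconds to inference `images`, after any bulk data transfer."""
    return images * infer_s + transfer_s

IMAGES = 100
IMAGE_BYTES = 100e6        # assume ~100 MB per 8K image
LINK_BPS = 100e9 / 8       # assume a 100 Gbps link running near line rate
INFER_S = 0.5              # assumed per-image inference time

transfer = IMAGES * IMAGE_BYTES / LINK_BPS   # 0.8 s to move all 100 images
local = completion_time(IMAGES, INFER_S)
remote = completion_time(IMAGES, INFER_S, transfer)
print(remote / local)      # ratio close to 1: transfer is a small fraction
```

Under these assumptions the transfer is under two percent of total completion time, which is why accelerating the transfer with NVMe over RDMA makes local and remote inferencing nearly indistinguishable.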

Why it matters for CXOs

For business leaders, these demonstrations are a preview of strategic advantage.

  • Sustainability:

    Enterprises face mounting pressure to meet environmental goals. DCI and optical infrastructure offer a path to massive energy efficiency gains, without compromising on performance.

  • Scalability:

    Optical networks enable truly elastic computing, allowing compute and storage resources to be pooled and allocated based on real-time needs across wide geographies.

  • Cost Control:

    Reduced power usage directly translates to lower operating costs, particularly in AI-heavy workloads and data-intensive operations. Also, idle capacity at different times of the day can be used more efficiently when higher needs arise in other areas.

  • Innovation Readiness:

    Whether it’s immersive customer experiences, real-time analytics, or remote diagnostics, the future of enterprise applications demands low-latency, high-bandwidth infrastructure. Optical networks deliver that edge.

Optical networks and the future of enterprise AI infrastructure

As the world accelerates the adoption of AI, the challenge is that today’s infrastructure cannot support that change. The AI Infrastructure of the future has to be developed in a new and collaborative way to address the needs of business and society, and it has to be more sustainable. NTT, along with the IOWN Global Forum, is approaching this by developing an infrastructure architecture based on DCI and optical technologies.

These technologies are already operational today and transforming how data is processed and moved. NTT has showcased this through these immersive exhibits, including real-time AI inference across a 10 km distance, proving that IOWN and DCI are key elements in enabling the sustainable AI infrastructure of the future that businesses, governments, and society will need.