The continuous increase in the size and complexity of deep neural network (DNN) models has driven a rapidly growing demand for computing capacity that has outpaced the scaling capability of conventional monolithic chips. Chiplet-based DNN accelerators have emerged as a promising solution for continued scaling. However, the metallic interconnects in these accelerators create a bottleneck due to high latency and excessive power consumption during long-distance communication. Researchers at George Washington University have developed a novel approach to overcome this communication bottleneck: they propose SPACX, a chiplet-based DNN accelerator design that exploits disruptive silicon photonics technology to enable seamless, low-overhead communication.