
This white paper discusses the nature and scale of the workload for artificial intelligence applications as background to the architecture of Cerebras' wafer-scale neural network compute engine. The paper stresses the advantages of more cores, more memory close to the cores, and more bandwidth between cores. It also emphasizes the importance of architectures that address sparse data and eliminate multiplication by zero.
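The sparsity point can be illustrated with a minimal sketch (a hypothetical example, not Cerebras' actual hardware logic): a dot product that skips zero operands avoids wasted multiply-accumulate work when activations are sparse, as they often are after a ReLU.

```python
def sparse_dot(weights, activations):
    """Dot product that skips multiplications where the activation is zero.

    On sparse data (e.g. post-ReLU activations), skipping zero operands
    saves multiply-accumulate work -- the idea the paper describes as
    eliminating multiplication by zero.
    """
    total = 0.0
    for w, a in zip(weights, activations):
        if a != 0.0:  # harvest sparsity: only multiply non-zero operands
            total += w * a
    return total

# Post-ReLU activation vectors are often mostly zeros:
acts = [0.0, 1.5, 0.0, 0.0, 2.0]
wts = [0.3, 0.2, 0.9, 0.4, 0.1]
print(sparse_dot(wts, acts))  # only 2 of the 5 multiplications are performed
```

In hardware, the analogous mechanism detects zero operands and gates the multiplier rather than branching in software, but the arithmetic saved is the same.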