Storage startup brings memory-speed processing to enterprises

April 08, 2019 //By Rich Pell
MemVerge (San Jose, CA), the inventor of Memory-Converged Infrastructure (MCI), has launched from stealth and announced the beta availability of what it says is the first system that eliminates the boundaries between memory and storage to power the most demanding data-centric enterprise workloads.

Built on top of Optane DC persistent memory technology from Intel, the MemVerge system, says the company, collapses the memory-storage barrier for the first time so that AI, IoT, and real-time analytics applications run flawlessly at memory speed, without the crashes and other challenges common today.

"The transformation of the data center is long overdue," says Charles Fan, MemVerge CEO and co-founder. "By eliminating the boundaries between memory and storage, our breakthrough architecture will power the most demanding AI and data science workloads today and in the future at memory speed, opening up new possibilities for data intensive computing for the enterprise."

"With MemVerge, companies can take full advantage of the larger memory capacity and unprecedentedly fast I/O of persistent memory, without changing their application programming models," says Fan. "This new architecture will revolutionize the infrastructure for all data-centric workloads in the next decade and beyond."

The MemVerge solution, says the company, delivers memory and storage services from a single distributed platform while integrating seamlessly with existing applications, so that enterprises can process the constant flood of machine-generated data produced by the on-demand economy. The company's MCI system is claimed to offer 10X the memory capacity and 10X the data I/O speed of current state-of-the-art compute and storage solutions on the market.

The proprietary Distributed Memory Objects (DMO) technology built into the MemVerge system provides a logical memory-storage convergence layer that harnesses Intel's new persistent memory to allow data-intensive workloads such as AI, machine learning (ML), big data analytics, IoT, and data warehousing to run at memory speed. The system expands memory seamlessly and stores data consistently across multiple systems so enterprises can analyze an enormous amount of data in real time, processing both large and small files with equal ease.
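MemVerge's DMO layer is proprietary, but the "run at memory speed without changing the programming model" idea it builds on is the standard persistent-memory pattern: expose the media as a DAX-mounted file and memory-map it, so reads and writes are ordinary loads and stores rather than storage syscalls. The sketch below illustrates that pattern only; it is not MemVerge's API, and it uses an ordinary temp file as a stand-in for a file on a real pmem/DAX mount.

```python
import mmap
import os
import tempfile

# Stand-in for a file on a DAX-mounted persistent-memory filesystem
# (e.g. something like /mnt/pmem/data on real hardware -- hypothetical path).
path = os.path.join(tempfile.gettempdir(), "pmem_demo.bin")
with open(path, "wb") as f:
    f.truncate(4096)  # reserve one page of "persistent memory"

# Map the file and write to it with plain memory stores -- no write() syscall
# on the hot path, which is where the memory-speed claim comes from.
with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 4096)
    mm[0:5] = b"hello"  # an ordinary store into the mapping
    mm.flush()          # on real pmem, persistence is a cache-line flush
    mm.close()

# The data is durable: reopen the file through the normal storage path.
with open(path, "rb") as f:
    data = f.read(5)
os.remove(path)
print(data)
```

On genuine Optane DC persistent memory the mapping bypasses the page cache entirely (DAX), so the store lands on durable media at memory latency; a convergence layer like DMO then distributes and replicates such mappings across nodes.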

The company also announced that it had raised $24.5 million in Series A funding from investors Gaorong Capital, Jerusalem Venture Partners, LDV Partners, Lightspeed Venture Partners, and Northern Light.
