Big Data deployment efficiency optimized by Big Memory

The disruptive capacity advantages of Memory1 provide significant benefits over DRAM-only implementations, according to Inspur Systems and Diablo Technologies.

As the volume and breadth of data continue to proliferate, businesses face both opportunities and challenges from this deluge. On one hand, there is genuine benefit in leveraging growing volumes of information to plan, act and react to continuously evolving business environments. On the other hand, the time, energy and resources required to collect and interpret this incoming data can easily outpace the value obtained. Turning to in-memory data processing applications built on DRAM for real-time analysis may help satisfy the former, but the massive system memory footprint required to sustain DRAM's extremely high performance, along with the capacity and cost constraints of the technology, exacerbates the latter.

With platforms like Apache Spark emerging and gaining popularity among those looking to make intelligent, real-time business decisions, there is an increased need for memory capacity to enable the fastest access and most optimized system-level performance. However, using excessive numbers of servers to design around DRAM capacity constraints leads to inefficient, high-cost deployments. Instead, an approach that enables more memory per server by utilizing high-capacity NAND flash is better able to provide the combination of business and economic value needed in Big Data environments, experts at Inspur Systems and Diablo Technologies found through close collaboration.

Diablo Technologies' Memory1 is the first memory DIMM to expose NAND flash as standard application memory. This tiered-memory solution provides the industry's highest-capacity byte-addressable memory modules. Memory1 offers significantly higher capacity than DRAM DIMMs, enabling dramatic increases in application memory per server. The result is a substantial performance advantage, due to increased data locality and reduced access times. Memory1 also minimizes Total Cost of Ownership (TCO) by reducing the number of servers required to support memory-constrained applications like Apache Spark.
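The server-consolidation argument above can be sketched with simple arithmetic: the number of servers needed to hold an in-memory working set is the working set divided by usable memory per server. The capacities below are illustrative assumptions, not vendor specifications.

```python
import math

def servers_needed(dataset_gb, mem_per_server_gb, overhead_factor=1.0):
    """Servers required to hold a working set entirely in memory.

    overhead_factor is a hypothetical multiplier for replication and
    framework overhead (assumption, not from the article).
    """
    return math.ceil(dataset_gb * overhead_factor / mem_per_server_gb)

# Illustrative example: an 8 TB in-memory working set.
dataset_gb = 8192
dram_only_gb = 512    # assumed DRAM-only server capacity
tiered_gb = 2048      # assumed capacity with flash exposed as memory

print(servers_needed(dataset_gb, dram_only_gb))  # 16 servers
print(servers_needed(dataset_gb, tiered_gb))     # 4 servers
```

Under these assumed capacities, quadrupling per-server memory cuts the fleet from 16 servers to 4, which is the mechanism behind the TCO claim: fewer servers also means fewer network ports, racks and operational overheads.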

"Dramatically expanding the application memory available in a single server directly addresses key issues found in traditional, DRAM-only deployments for Big Data processing platforms like Apache Spark," said Maher Amer, Chief Technology Officer at Diablo Technologies. "Because each server is capable of doing more work, jobs can be handled more efficiently with fewer servers, which also minimizes the associated networking and operational expenses. A tiered NAND flash approach is key to providing the benefits of real-time analysis while minimizing the expense required to collect and interpret valuable information."


