Quick Memory


Computer memory capacity has expanded greatly, allowing machines to store and work with far more data, but the time the central processing unit, or CPU, spends waiting on each memory access slows the machine and negates the gains that a large memory provides.

Song Jiang, a UTA associate professor in the Department of Computer Science and Engineering, received an NSF grant to improve how computers access data in memory.

To counteract this issue, which is known as the memory wall, computers use a cache, a hardware component that stores recently accessed data so it can be retrieved more quickly in the future. Song Jiang, an associate professor in the Department of Computer Science and Engineering at The University of Texas at Arlington, is using a three-year, $345,000 grant from the National Science Foundation to explore how to make better use of the cache by allowing programmers to access it directly in software.

“Efficient use of a software-defined cache allows quick access to data along with large memory. With memory becoming more expansive, we need to involve programmers to make it more efficient. The programmer knows best how to use the cache for a particular application, so they can add efficiency without making the cache a burden,” Jiang said.
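As a rough illustration of what programmer-managed caching can look like in practice, the sketch below keeps a small table of recently used values and evicts the least recently used entry when the table fills up. It is only an illustrative example, not one of Jiang's prototypes; the class name, capacity, and eviction policy are assumptions.

    from collections import OrderedDict

    class SoftwareCache:
        """Minimal programmer-managed cache with LRU eviction (illustrative only)."""

        def __init__(self, capacity, backing_lookup):
            self.capacity = capacity              # the programmer decides how much to keep cached
            self.backing_lookup = backing_lookup  # the slow path, e.g. a full index lookup
            self.entries = OrderedDict()          # key -> value, ordered by recency of use

        def get(self, key):
            if key in self.entries:               # cache hit: skip the slow path entirely
                self.entries.move_to_end(key)
                return self.entries[key]
            value = self.backing_lookup(key)      # cache miss: fall back to the full lookup
            self.entries[key] = value
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)  # evict the least recently used entry
            return value

    # The programmer, who knows the application's access pattern, sizes and drives the cache.
    big_index = {i: i * i for i in range(1_000_000)}   # stand-in for a large, slow lookup structure
    cache = SoftwareCache(capacity=1024, backing_lookup=big_index.__getitem__)
    print(cache.get(42), cache.get(42))                # the second call is a cache hit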

When a computer accesses its memory, it must go through the index of all the data stored there, and it must do so each time it returns to the memory. Each step slows the process. With a software-defined cache, the computer can combine or skip steps and reach the data it needs without walking through the index from the beginning each time. Jiang has studied these issues for several years and has developed four prototypes, which he will test to determine whether they can serve large memories without slowing CPU speeds.
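The index-skipping idea can be pictured with another small, hypothetical sketch: once the location of a piece of data has been resolved through the index, the program remembers that location so later accesses jump straight to the data. Again, the names and structures here are assumptions for illustration, not Jiang's design.

    index = {"user:42": 8_192}      # stand-in for the index of a large memory region: key -> offset
    memory = bytearray(16_384)      # stand-in for the large memory itself
    location_cache = {}             # software-managed cache of already-resolved locations

    def read(key, length=8):
        offset = location_cache.get(key)
        if offset is None:          # first access: walk the index (the slow step)
            offset = index[key]
            location_cache[key] = offset
        return bytes(memory[offset:offset + length])  # later accesses go straight to the data

    print(read("user:42"))  # pays the index lookup
    print(read("user:42"))  # skips it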

The current trend in technology is toward non-volatile memory, or NVM. NVM is expected to be much denser, larger, and less expensive, providing many terabytes of memory. Access speeds will not change much, but the size will expand greatly, which will also increase the time needed to go through the index. If Jiang is successful, access speeds will keep pace with that growth.

“As we ask our computer systems to work with increasingly large data sets, speed becomes an issue. Dr. Jiang’s work could provide a breakthrough in how software developers approach software-defined caches and, as a result, make it easier and less time-consuming to analyze big data,” said Hong Jiang, chair of UTA’s Computer Science and Engineering Department.

Written by Jeremy Agor


