Ore. — In the 21st century, instead of depending on continually
shrinking design rules, microprocessor makers are harnessing multiple
cores for parallel execution. Memory chip architectures, however, have
not kept up, according to a cryptographer who claims to have created a
memory chip architecture for the 21st century—one that matches multicore
microprocessors with parallel, concurrent access to multiple memory banks.
"My design borrows extensively from today's modern multicore CPUs,"
said Joseph Ashwood, an independent security cryptanalyst and design
consultant residing in Gilroy, Calif. Ashwood was lead cryptanalyst for
Arcot Systems in Santa Clara, Calif., before going independent in 2001.
"As far as concurrency goes, my memory architecture shares some features
with Fibre Channel."
According to Ashwood, his architecture provides parallel access to the
bit cells on memory chips, breaking the serial bottleneck that strangles
nonvolatile storage media such as flash, and it can be applied to any
memory chip's bit cells. The Ashwood memory
architecture works by integrating smart controller circuitry next to the
memory array on a single chip, providing parallel access to the array
for hundreds of concurrent processes, thereby increasing throughput and
lowering average access time.
"We have a new way of assembling the memory, with a few new elements
I was led to by my experience with cryptography. I am basically applying
very deep cryptographic techniques to memory architecture, resulting in
a unique new design that is very fast and compact. Bringing in these new
elements enables a lot of good things, especially concurrency,
permitting hundreds of simultaneous memory operations," said Ashwood.
"Compared to DDR, for instance, my architecture goes inside the chip
and reorganizes how the bit cells are accessed, thereby utilizing them
much more efficiently," he added. "Transfer rate is faster, too—for
instance, right now, DDR-II for DRAM only goes up to 12 Gbytes per
second, but our architecture can deliver 16 Gbytes per second when using
flash memory and is compatible with PRAM or any other nonvolatile
semiconductor memory cells."
Sound too good to be true? JoAnne Leff, founder of J.L. Associates
(New York), thought the same thing when she was first approached by
Ashwood to represent him in licensing the technology. So she sent the
design over to Carnegie Mellon University for confirmation.
"We were skeptical, of course, but Carnegie Mellon confirmed for us
that the Ashwood memory architecture really is a breakthrough in memory
design," said Leff. "Now we want to license it to all major players
involved in the applications of this technology, not only to improve the
performance of individual memory chips, but also to give users fast,
parallel access to solid-state drives."
Carnegie Mellon's evaluation for J.L. Associates claims that
solid-state drives built from nonvolatile memory chips are an especially
good application, and that the Ashwood memory architecture could
rejuvenate nonvolatile memory markets, such as flash, by improving their
performance both today and tomorrow, since performance also scales up
with larger capacities.
"This new technology enables parallel data storage and access on a
single nonvolatile memory chip. The scalability created by this
technology provides superior speed in accessing and storing data with
higher storage capacity at a single chip level. With on-chip power
management, the technology is truly enabling for applications that
require on-chip high-speed data transfer for various high-capacity,
nonvolatile memory devices," said the Carnegie Mellon evaluation. "It
has been predicted by many people, such as Gordon Bell, that by 2015
personal devices such as PDAs and cell phones will need to have a
nonvolatile storage capacity of at least 1 terabyte. This memory
technology invention provides a solution to meet that need."
Ashwood, however, does admit to two downsides to his memory
architecture. First, it is still just a paper design. He plans to work
with licensees to implement his design on their memory arrays, but so
far only a software simulation has been completed.
"I have fully developed the memory chip architecture, and I have run
a software simulation to verify that it works, but so far I have not
done a design at the electrical signal level—that kind of detail is
dependent on who ends up licensing it," he said.
The second downside is that the parallel access overhead of the
Ashwood memory architecture slightly slows down memory access times to
individual memory cells—a disadvantage that is offset by its many
parallel access channels, Ashwood said.
"For instance, if a NAND flash chip has an access time of 20 to 50
nanoseconds today, adding my architecture would increase that access
time to 50 to 70 nanoseconds," he said. "But remember, during that time,
100 or more other memory retrieval operations could be in progress
concurrently, yielding an effective access time of just a few
nanoseconds per retrieval."
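The arithmetic behind that claim can be sketched with a simple model. The perfect-overlap assumption here is mine, not Ashwood's (his scheduling mechanism is undisclosed); under ideal overlap, 100-way concurrency would push effective latency below a nanosecond, so the quoted "few nanoseconds" presumably reflects real-world overhead:

```python
# Hedged sketch: if a chip can keep N retrievals in flight, each with
# raw latency L nanoseconds, the steady-state effective latency per
# retrieval approaches L / N. This assumes perfect overlap, which is
# an idealization -- the actual architecture has not been disclosed.

def effective_access_ns(raw_latency_ns: float, concurrent_ops: int) -> float:
    """Idealized effective access time under full overlap."""
    return raw_latency_ns / concurrent_ops

# Ashwood's figures: 50-70 ns raw latency, 100+ concurrent operations.
print(effective_access_ns(50, 100))  # 0.5
print(effective_access_ns(70, 100))  # 0.7
```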
Late last year Ashwood filed a patent application for his memory
architecture, but because chip makers could implement it before the
patent is granted, he is choosing to keep most of the architecture
secret until it is granted next year.
"This architecture is so easy to implement—a chip maker could roll
out a working prototype in as little as three months," said Ashwood.
Ashwood, however, has disclosed the broad outline of the architecture's
function: reorganizing the memory hierarchy to provide parallel access
to a chip's data. He has also described its features and compared its
performance with both DRAM and hard disks.
"Unlike traditional memory architectures that degrade in performance
as more bit cells are added to an array, our memory architecture's
performance increases each time you add cells," said Ashwood. "For
instance, because of the way our architecture scales, if you double the
capacity of our memory chip, it also becomes twice as fast as before."
Overhead is low, too, adding only about 3 percent to the die area of
a memory chip, according to Ashwood.
"Current flash cells are already extremely dense—they should be able
to fit a terabyte in less than a cubic inch," said Ashwood. "The problem
is that their yields would fall to 30 percent and the speed to just 32
million bytes per second. Utilizing my technology with the same flash
cells allows a yield in the upper 90 percent [range] and a speed of 16
billion bytes per second."
When using the Ashwood memory architecture with multiple flash chips
configured as a solid-state disk (SSD), instead of adding 3 percent,
each memory chip's die area could actually be smaller than it is today,
because the access circuitry to the SSD would be common to all the
memory chips on the drive, he said.
Using his memory architecture will also increase the lifetime of a
drive, according to Ashwood. Flash cells can only endure about 100,000
cycles before they burn out. By offering greater flexibility in page
reassignment, he said, the Ashwood memory architecture can extend a
drive's lifetime by a factor of about 500.
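Page reassignment of this kind is generically known as wear leveling. Ashwood has not disclosed his scheme, so the sketch below only illustrates the general principle it relies on: redirecting writes away from worn pages so that no single physical page absorbs all of the write cycles. The class and its names are hypothetical.

```python
# Hedged sketch of wear leveling by page reassignment (generic
# technique, NOT Ashwood's undisclosed design). Each logical write is
# redirected to the least-worn physical page; a real controller would
# also migrate the displaced data, which is omitted here for brevity.

class WearLeveler:
    def __init__(self, num_pages: int, endurance: int = 100_000):
        self.endurance = endurance                 # cycles a cell survives
        self.erase_counts = [0] * num_pages        # per-page wear tally
        self.mapping = list(range(num_pages))      # logical -> physical

    def write(self, logical_page: int) -> int:
        # Pick the physical page with the fewest erases so far.
        least_worn = min(range(len(self.erase_counts)),
                         key=self.erase_counts.__getitem__)
        self.mapping[logical_page] = least_worn
        self.erase_counts[least_worn] += 1
        return least_worn

wl = WearLeveler(num_pages=4)
for _ in range(8):
    wl.write(0)              # hammer a single logical page
print(wl.erase_counts)       # wear is spread evenly: [2, 2, 2, 2]
```

Without remapping, the hammered page alone would accumulate all eight erases; with it, wear spreads across every physical page, which is how a drive's effective lifetime can be multiplied.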