Marvell has announced the 88NV1160, its new controller for affordable, miniature SSDs. The chip can be used to build small form-factor SSDs in M.2 as well as BGA packages. The 88NV1160 supports all modern and upcoming types of NAND flash, LDPC error correction, the NVMe protocol and other features of modern SSD controllers, but it does not require an external DRAM buffer, which reduces the BOM cost of upcoming SSDs.

The Marvell 88NV1160 is a quad-channel controller that supports a PCIe 3.0 x2 interface, the NVMe 1.3 protocol (in addition to AHCI) as well as various types of NAND flash memory, including 15 nm TLC, 3D TLC and 3D QLC, over the ONFi 3.0 interface at up to 400 MT/s. The 88NV1160 is powered by two ARM Cortex-R5 CPU cores along with embedded SRAM and hardware accelerators to optimize IOPS performance. The chip supports Marvell's third-generation LDPC error correction technology (which the company calls NANDEdge ECC) in a bid to enable high endurance on drives featuring ultra-thin TLC or 3D QLC memory.

Specifications of the Marvell 88NV1160 at a Glance
Compute Cores Two ARM Cortex-R5
Host Interface PCIe 3.0 x2
Protocol of Host Interface AHCI, NVMe 1.3
Supported NAND Flash Types 15 nm TLC, 3D TLC, 3D QLC
Supported NAND Flash Interfaces Toggle 2.0 and ONFi 3.0, up to 400 MT/s
Page Sizes Unknown
Number of NAND Channels 4 channels with 4 CE per channel (16 targets in total)
ECC Technology LDPC (third-generation LDPC ECC by Marvell)
Maximum SSD Capacity 1024 GB (when using 3D QLC ICs with 512 Gb capacity)
Maximum Sequential Read Speed 1600 MB/s
Maximum Sequential Write Speed Unknown, depends on exact type of memory
Power Management Low power management (L1.2) design
Package 9 × 10 mm TFBGA package
Voltages 3.3V/1.8V/1.2V power supply (according to M.2 specs)

The 88NV1160 controller is specifically tailored for upcoming affordable SSDs, which is why it does not officially support SLC and 2D MLC NAND. The maximum capacity of a 3D QLC-based SSD featuring the 88NV1160 is expected to be around 1 TB, which should be enough for entry-level SSDs (as well as solid-state storage for premium tablets, ultrabooks and other computing devices). As for performance, Marvell quotes a 1600 MB/s maximum read speed for such SSDs.
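That ~1 TB ceiling follows directly from the controller's channel topology. A quick sanity check, assuming one 512 Gb QLC die per chip-enable target (the densest configuration the spec table implies):

```python
# Back-of-the-envelope maximum capacity for the 88NV1160,
# assuming one 512 Gb 3D QLC die per chip-enable (CE) target.
channels = 4
ce_per_channel = 4
die_capacity_gbit = 512

targets = channels * ce_per_channel        # 16 NAND targets in total
total_gbit = targets * die_capacity_gbit   # 8192 Gbit of raw flash
total_gbyte = total_gbit // 8              # 1024 GB, i.e. ~1 TB

print(f"{targets} targets -> {total_gbyte} GB maximum")
```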

The new chip from Marvell is made using 28 nm process technology and is shipped in 9 × 10 mm TFBGA package, which can be used to build SSDs in BGA (M.2-1620 and smaller) packages as well as drives in M.2-2230/2242 form-factors. The 88NV1160 controller uses 3.3V/1.8V/1.2V power supply, in accordance with the M.2 standards.

The 88NV1160 is not the first controller from Marvell that does not require any external DRAM buffers. The company also offers the low-cost 88NV1120 with a SATA interface as well as the 88NV1140 for PCIe 3.0 x1 SSDs. All of the aforementioned controllers are based on two ARM Cortex-R5 cores, feature Marvell's third-gen LDPC implementation and support modern types of NAND flash memory (15 nm 2D TLC and 3D TLC/QLC). The 88NV1160 is the newest DRAM-less controller from the company and is designed for fairly advanced SSDs with up to 1600 MB/s read speeds. Still, it is clearly a solution for affordable drives: unlike the high-end 88SS1093 (or its less advanced sibling, the 88SS1094), it does not support 2D MLC or SLC NAND flash and cannot take advantage of eight NAND channels (which is why it does not need PCIe 3.0 x4).

Comparison of Modern SSD Controllers from Marvell

|                            | 88NV1120 | 88NV1140 | 88NV1160 | 88SS1093 |
| Compute Cores              | Two ARM Cortex-R5 | Two ARM Cortex-R5 | Two ARM Cortex-R5 | Three cores |
| Host Interface             | SATA | PCIe 3.0 x1 | PCIe 3.0 x2 | PCIe 3.0 x4 |
| Protocol of Host Interface | AHCI | AHCI, NVMe 1.3 | AHCI, NVMe 1.3 | NVMe 1.1 |
| Supported NAND Flash Types | 15 nm TLC, 3D TLC, 3D QLC | 15 nm TLC, 3D TLC, 3D QLC | 15 nm TLC, 3D TLC, 3D QLC | 15 nm SLC/MLC/TLC, 3D NAND |
| Number of NAND Channels    | 2 channels, 4 CE per channel (8 targets) | 2 channels, 4 CE per channel (8 targets) | 4 channels, 4 CE per channel (16 targets) | 8 channels, 4 CE per channel (32 targets) |
| ECC Technology             | Marvell's third-gen LDPC-based ECC | Marvell's third-gen LDPC-based ECC | Marvell's third-gen LDPC-based ECC | Marvell's third-gen LDPC-based ECC |
| Host Memory Buffer         | No | Yes | Yes | - |
| Package                    | 8 × 8 mm TFBGA | 8 × 8 mm TFBGA | 9 × 10 mm TFBGA | BGA |
| Compatibility              | M.2/BGA SSDs | M.2/BGA SSDs | M.2/BGA SSDs | M.2/2.5" SSDs |

Marvell did not reveal when it expects the first SSDs based on the 88NV1160 controller to hit the market, but it indicated that the chip is now sampling globally. In addition, the company offers turnkey firmware to its customers to enable faster time to market.

Source: Marvell


  • mindless1 - Sunday, September 4, 2016 - link

    You would prefer to pay a premium for a relatively slow, niche SSD product? Frankly, for the same money or less you'd be as well off tossing a $20 PCIe x1 SSD controller into the system and then using a conventional SATA SSD.

    I could be wrong, someone could make an inexpensive PCIe SSD but I don't see the sales rate being enough to bring it down near SATA SSD price points. Besides, PCIe slots are usually more precious than SATA ports.
  • rpg1966 - Wednesday, August 17, 2016 - link

    Question from a dummy: this supports 15 nm TLC, among others. But why do controllers care about the transistor feature size of the memory they're controlling?
  • Shadow7037932 - Wednesday, August 17, 2016 - link

    For starters, it would be important for optimizing wear leveling. Second, it probably has to do with gate voltage differences and such between the different NANDs.
  • Billy Tallis - Wednesday, August 17, 2016 - link

    It's mostly a matter of how robust the error correction is. Most controllers that don't have LDPC or similar won't advertise support for 16/15 nm TLC, even though they could actually interact with it given the right firmware, because they wouldn't be able to deliver acceptable reliability without stronger error correction. The industry consensus is that LDPC is good enough for 15 nm planar TLC, and 3D QLC is expected to be comparable in reliability. Some controllers (e.g. the Maxiotek 81xx) that don't have LDPC are intended for use only with 3D TLC, or with planar or 3D MLC.
  • TheWrongChristian - Wednesday, August 24, 2016 - link

    I've always wondered why the error correction isn't handled on the FLASH itself, rather than in the controller. That way, the die geometry and error correction in use would be irrelevant.

    Of course, having it on the controller allows the error correction to be shared, optimizing die area, but is the error correction really that big? An LDPC implementation is probably noise in terms of transistors and die area, and it simply makes more sense to put it at the FLASH package level.

    What am I missing?
  • Arnulf - Thursday, August 18, 2016 - link

    QLC is the dumbest idea ever.

    TLC is bad with regard to reliability versus capacity (SLC and MLC are tied in this regard), and things spiral downward quickly from there.

    Reliability (as in: the voltage interval that describes one cell state) halves for each bit added (so it goes down exponentially) whereas capacity only increases by one bit for each bit added (duh).

    Moving from SLC to MLC halves the reliability and doubles the capacity (= same ratio as SLC).

    SLC/MLC to TLC means 1/4 the reliability and 3 times the capacity (= 0.75× the ratio of SLC or MLC).

    SLC/MLC to QLC means 1/8 the reliability and 4 times the capacity (= 0.5× the ratio of SLC or MLC).

    Things get progressively worse as you move away from MLC (two bits per cell). Stick with SLC if you need absolute performance and go with MLC if you need more capacity.
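The ratios above can be checked with a few lines of Python: with b bits per cell there are 2**b charge states, so the per-state voltage window shrinks as 1/2**b while capacity grows only linearly with b.

```python
# Verify the capacity-times-window ratios from the comment above:
# b bits per cell means 2**b charge states, so the voltage window per
# state shrinks as 1/2**b while capacity only grows linearly with b.
ratios = {}
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    states = 2 ** bits
    window = 1 / states                     # per-state window, relative units
    # capacity * window, normalized so that SLC (1 bit, 2 states) = 1.0
    ratios[name] = (bits * window) / (1 * (1 / 2))
    print(f"{name}: {states} states, capacity x window ratio = {ratios[name]:.2f}")
```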
  • ddriver - Thursday, August 18, 2016 - link

    I wouldn't buy anything below MLC endurance, and I am not very keen on NAND produced below 40 nm either. That being said, QLC might have its applications for data that is going to be stored long-term without modification, but then again, the drop in endurance is approaching a dangerous level. They should focus on other technologies with more potential for density instead of pushing NAND this way.
  • Vatharian - Thursday, August 18, 2016 - link

    It has been known since the beginning that NAND would evolve in this direction; both Intel and Micron mentioned this when introducing proof-of-concept SSDs. The transition to vertical stacking eases the blow of reduced reliability, and even for home users QLC and beyond has its uses: some data is almost never rewritten. If QLC proves to be suitable for long-term storage, it may even be the missing piece of technology that kills hard drives in the mass market. For other scenarios, even in a SOHO environment, TLC is better suited: think of a 5-20 workstation office with regular daily or weekly backups, where QLC may be too strained even after a year while TLC will still be perfectly fine during the expected lifetime of the solution.

    In the enterprise/data center, every form of flash and memory technology has its place, from HDDs to RAM pools and everything in between, SLC to QLC. Just not every system needs all of them.
  • Daniel Egger - Thursday, August 18, 2016 - link

    > SLC/MLC to QLC means 1/8 and 4x (= 0.50% of SLC or MLC).

    It's actually far worse than that, about one order of magnitude per step, just considering the number of P/E cycles. But the important question is a different one: what happens if the drive is used as cold storage rather than being constantly powered? If the device is online, then it's possible to keep track of bits drifting away and refresh them, but what would you do in the cold-storage (worst case: WORM) case?
  • wumpus - Thursday, August 18, 2016 - link

    Use of 3D NAND changes things a bit, especially with DRAM-less SSDs. If you can use a 3D trench in "SLC mode", you can keep a [moving] buffer that takes the endurance abuse [better] and only write things out to TLC (or QLC) when needed. *Hopefully* this keeps write amplification from killing your SSD.

    I think that QLC really is just a marketing checkbox for this thing. I find the whole idea of software-controlled memory silly, and would rather see the thing read and write with nothing but pure hardware and let the software control the overall management. If it's running two ARMs, my best guess is that it is all software and that switching from TLC to QLC is little more than a few lines of code (at least if you started out assuming that; stretching from a "no more than TLC" assumption is a different story).
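The SLC-cache idea described above can be sketched in a few lines of Python. This is a hypothetical toy model (buffer size, page granularity and fold policy are all made up), not Marvell's actual firmware logic:

```python
# Toy model of an SLC write cache in front of QLC: incoming writes land
# in a small SLC-mode buffer and are folded into QLC only when the
# buffer fills, so hot data that gets overwritten while still in the
# buffer never costs a QLC program cycle.
SLC_BUFFER_PAGES = 4   # hypothetical buffer size, in pages

slc_buffer = {}        # page -> data, SLC-mode staging area
qlc_store = {}         # page -> data, QLC backing store
qlc_programs = 0       # running count of QLC program operations

def write(page, data):
    global qlc_programs
    slc_buffer[page] = data            # overwriting in SLC is cheap
    if len(slc_buffer) > SLC_BUFFER_PAGES:
        qlc_store.update(slc_buffer)   # fold the whole buffer into QLC
        qlc_programs += len(slc_buffer)
        slc_buffer.clear()

# Ten rewrites of the same hot page stay in the buffer and never
# touch QLC at all.
for i in range(10):
    write(0, f"version-{i}")
print(f"QLC programs after 10 hot-page rewrites: {qlc_programs}")
```

Without the buffer, those ten rewrites would each have consumed a QLC program cycle; with it, the count stays at zero until cold pages force a fold.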
