Internet of Things chip design sounds like a simple topic on the face of it. Look deeper, though, and it becomes clear there is no single IoT, and certainly no single type of chip that will work across the ever-expanding number of applications and markets that collectively make up the IoT.

Included under this umbrella term are sensors, various types of processors, a growing menu of on-chip and off-chip memory types, and a long list of I/O and interface IP, chips and chiplets. There also are different approaches emerging for packaging these devices, including custom ASICs in the cloud, various SoCs, 2.5D chips for networks and servers, and fan-out wafer-level packaging for MEMS and sensor clusters.

In addition, there are safety and security considerations involved in developing chips that go into increasingly connected cars, medical devices, and industrial control systems. That adds to the complexity and cost, as well as the time it takes to design, validate, verify and debug these devices.

“IoT is kind of a catch-all for many, many different applications,” said Ron Lowman, strategic marketing manager for IoT at Synopsys. “What you actually see is a lot of designs that are intended to cover more than one application, so sometimes you have supersets—things that may be overdesigned for one particular application but can handle several. That’s one thing that’s different. If you look at the mobile space, or many of the other spaces, they’re more optimized for one specific application and then re-used, whereas IoT has more of a general-purpose version, and then in some cases they’ll do specific designs and try to re-leverage them for other markets. The other thing that’s different is that because it’s more broad and general-purpose (which is changing), volumes are increasing, so we’re seeing more specific designs for specific applications. You saw that in metering, for instance, which was much more targeted.”

Put simply, these are not the scaled-down, commoditized designs that were expected several years ago.
“At the high level, IoT is about connecting things to the Internet that weren’t connected in the past, and then enabling services based on that,” said Jeff Miller, product marketing manager at Mentor, a Siemens Business. “There’s a lot that goes into it. In order to realize that, you have a three-tier architecture—the servers and the cloud component, the gateway component that interfaces between the IoT edge devices and the cloud, and the IoT edge devices themselves that make up the interface between the Internet and the real world.”
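To make that three-tier split concrete, here is a minimal, purely illustrative Python sketch in which edge devices push raw readings to a gateway that batches and summarizes them before anything reaches the cloud. The class names, the simulated temperature sensor, and the batch size of 10 are all invented for illustration, not drawn from any vendor’s stack.

```python
import random
import statistics

# Illustrative sketch of the three-tier IoT architecture described above.
# All class and method names here are hypothetical, not a real framework.

class CloudBackend:
    """Top tier: receives summarized readings and stores them for analysis."""
    def __init__(self):
        self.records = []

    def ingest(self, record):
        self.records.append(record)

class Gateway:
    """Middle tier: aggregates raw edge readings before forwarding upstream."""
    def __init__(self, cloud):
        self.cloud = cloud
        self.buffer = []

    def receive(self, reading):
        self.buffer.append(reading)
        if len(self.buffer) >= 10:          # forward in batches, not per sample
            summary = {
                "mean": statistics.mean(self.buffer),
                "max": max(self.buffer),
                "count": len(self.buffer),
            }
            self.cloud.ingest(summary)
            self.buffer.clear()

class EdgeDevice:
    """Bottom tier: samples a sensor and pushes readings to its gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def sample(self):
        reading = 20.0 + random.gauss(0, 0.5)   # simulated temperature sensor
        self.gateway.receive(reading)

cloud = CloudBackend()
gw = Gateway(cloud)
device = EdgeDevice(gw)
for _ in range(30):
    device.sample()
print(cloud.records)   # three batch summaries instead of 30 raw samples
```

The value of the middle tier shows up in the output: the cloud stores three small summaries rather than 30 raw samples.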
Redefining the edge
What constitutes an edge device isn’t always clear. Three years ago, any talk of an edge device invariably circled back to a smart watch or a connected home appliance, which was supposed to send data collected by a number of sensors through some electronic gateway to the cloud for processing. While that seemed like a logical progression for technology at the time, the reality is that so much data is being generated by sensors that it’s energy-inefficient to send everything to the cloud for processing, and in some cases the round trip to the cloud is simply too slow.
This is where designing chips for the IoT gets really confusing. On one hand, these devices need to be inexpensive. But in some markets they also need to be more reliable, more secure, and conform to a number of standards such as ISO 26262 in automotive, or the OMAC and OPC industrial standards for the IIoT. That adds to the cost and the time it takes to bring these devices to market. On top of that, particularly for mobile electronics, these systems need to be extremely low power to conserve battery life. That requires complex power management, which further adds to the price and the complexity. And they need to have enough performance to get a job done, whatever that particular task may be.
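To see why power management dominates these designs, consider a simple duty-cycling budget. The numbers in this sketch (coin-cell capacity, active and sleep currents, one second of activity per hour) are illustrative assumptions, but they show how average current, not peak current, determines battery life.

```python
# Back-of-the-envelope battery-life model for a duty-cycled IoT node.
# All numbers below are illustrative assumptions, not measured values.

battery_mah     = 220.0     # e.g., a CR2032 coin cell
active_ma       = 15.0      # MCU + radio while awake and transmitting
sleep_ua        = 2.0       # deep-sleep current
active_s_per_hr = 1.0       # seconds awake each hour

duty_cycle = active_s_per_hr / 3600.0
avg_ma = active_ma * duty_cycle + (sleep_ua / 1000.0) * (1 - duty_cycle)

hours = battery_mah / avg_ma
print(f"average current: {avg_ma * 1000:.1f} uA")   # ~6 uA
print(f"estimated life:  {hours / 24 / 365:.1f} years")
```

Under these assumptions the node averages only a few microamps, which is why sleep current and duty cycle, rather than raw performance, drive edge-device architecture.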
“You have to be innovative everywhere,” said Dipesh Patel, president of the IoT Service Group at ARM. “That includes power, real estate and cost. Today, we’re seeing chips coming out at 55nm and 40nm. Moving those to 40nm and 28nm will provide more cost savings. The cost is coming down as MCUs take advantage of Moore’s Law. To be more secure, they will have to migrate to 32 bits, which also will allow them to shift to more modern areas.”
Other strategies for cutting costs are coming into play, as well. One involves packaging multiple sensors into a cluster to gain economies of scale. The thinking behind this approach is that it is less expensive to build that cluster, even if not all the sensors are deployed, than to build each sensor individually.
“The direction is toward a more standardized sensor unit,” said Brandon Wang, engineering group director at Cadence. “This turns it into more of a platform, and you can get it tomorrow rather than a dedicated sensor where you do a custom design. So every system will have sensors, but what if you design a sensor hub where you optimize those sensors?”
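One way to read that idea in software terms is a hub that exposes a fixed cluster of sensors behind a common interface, with each product enabling only the subset it needs. The sketch below is hypothetical; the sensor list, class, and method names are invented for illustration.

```python
# Hypothetical sketch of the "standardized sensor hub" idea: one cluster
# of sensors behind a common interface, with unused sensors powered down.

class SensorHub:
    AVAILABLE = ("accel", "gyro", "temp", "humidity", "pressure")

    def __init__(self, enabled):
        unknown = set(enabled) - set(self.AVAILABLE)
        if unknown:
            raise ValueError(f"hub does not provide: {unknown}")
        self.enabled = set(enabled)

    def read(self, sensor):
        if sensor not in self.enabled:
            raise RuntimeError(f"{sensor} is powered down on this build")
        return self._read_hw(sensor)   # would talk to real hardware

    def _read_hw(self, sensor):
        return 0.0                     # stub for illustration

# A thermostat product enables only two of the five sensors in the cluster.
hub = SensorHub(enabled=["temp", "humidity"])
print(hub.read("temp"))
```

The same standardized cluster ships in every product; only the enabled set changes, which is where the economy of scale comes from.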
As Mentor’s Miller noted, the edge is where a lot of the IoT-specific design challenges come into play. “At the IoT edge, that’s where we’re really seeing a lot of focus on making those devices as fit-for-purpose as they possibly can be. There are a number of things that drive designs in that direction, and it revolves around the fact that these are really high-volume products. They’re connecting many billions of things to the Internet. They’re going to have to be inexpensive. They’re going to have to be out in the field. And so they’re going to have to interact with the physical side, and power requirements have to be met. And they have to interface with the real world through sensors and actuators, and those involve high voltages, multi-physics, and things like MEMS and photonics and other physical domains besides just electronics. There’s really a lot that goes into making these devices. And they tend to be fairly well customized for the particular application area because they have to meet the power and physical size envelope, and have high volume requirements, and really have to drive cost down.”
Fig. 2: Smart LED floodlight. Source: Elitesemicon.com
The middle ground
When Cisco first began making predictions about billions of devices that collectively would comprise the IoT, the assumption was that data would be sent from an edge device through a gateway to the cloud. Raw data would be processed and analyzed, and relevant information would be passed back to the edge device through a gateway.
There are a couple of problems with that scenario. First, I/O consumes more power than processing some of the data locally, which in the case of a mobile device affects battery life. As a result, a variety of mid-level servers have been developed to connect the edge devices with the cloud. That provides a second, and sometimes a third and fourth level of filtering.
“We now have IoT applications on the edge where minimizing power is more important than the historical focus we have on maximizing performance,” said Ian Dennison, senior group director for Cadence’s Custom IC & PCB Group. “That changes the design focus. Clearly, power shutoff and voltage-frequency scaling will still be necessary steps, but hierarchical computing and threshold computing can be applied if you can drop the frequency.”
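A back-of-the-envelope comparison makes the first point concrete. The per-bit radio energy and per-operation compute energy below are order-of-magnitude assumptions (real figures vary widely by radio, process node, and voltage), but the gap they illustrate is why filtering locally and sending a summary beats streaming raw samples.

```python
# Rough energy comparison: radio I/O vs. local computation.
# The per-bit and per-op figures are order-of-magnitude assumptions.

tx_nj_per_bit = 100.0    # assumed radio energy per transmitted bit
op_nj_per_op  = 0.1      # assumed energy per 32-bit MCU operation

samples         = 1000
bits_per_sample = 32

raw_tx_nj = samples * bits_per_sample * tx_nj_per_bit

# Local alternative: ~10 ops per sample to filter/average, then send 1 value.
local_nj = samples * 10 * op_nj_per_op + bits_per_sample * tx_nj_per_bit

print(f"send raw stream: {raw_tx_nj / 1e6:.2f} mJ")     # 3.20 mJ
print(f"filter locally:  {local_nj / 1e6:.4f} mJ")      # 0.0042 mJ
```

Under these assumptions, streaming the raw samples costs roughly three orders of magnitude more energy than computing locally and transmitting one result.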
Second, there is way too much data being generated by sensors to send everything to the cloud, which has given rise to the mid-level compute platform, which fits somewhere between the cloud and the edge. It could be a smart or simple gateway, an edge server or a fog server.
“When IoT first emerged, the idea was that you would collect everything and send it through a gateway to the cloud,” said Dean Freeman, vice president of research at Gartner. “But the amount of data is enormous. So not only do you not have to send everything, you don’t have to compute everything. If all you want to see is the outlier data, then you need less data transmitted. And why go all the way to the cloud? Depending on the mesh network, the signal and the connection, that could create latency of multiple minutes. The way to solve that is to push native intelligence and some AI down to the edge.”
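Freeman’s send-only-the-outliers strategy can be sketched as a small streaming filter at the edge that keeps running estimates of normal behavior and forwards only readings that deviate sharply. This is a minimal illustration with invented parameter values, not production anomaly-detection code.

```python
# Minimal sketch of edge-side outlier filtering: transmit a reading only
# when it deviates sharply from a running estimate of normal behavior.

class OutlierFilter:
    def __init__(self, alpha=0.05, threshold=3.0):
        self.alpha = alpha          # smoothing factor for the running stats
        self.threshold = threshold  # deviations (in sigmas) that count as outliers
        self.mean = None
        self.var = 1.0              # prior scale; an assumption for this sketch

    def should_send(self, x):
        if self.mean is None:
            self.mean = x
            return True                     # always send the first sample
        dev = x - self.mean
        outlier = abs(dev) > self.threshold * (self.var ** 0.5)
        # update exponentially weighted estimates of mean and variance
        self.mean += self.alpha * dev
        self.var = (1 - self.alpha) * (self.var + self.alpha * dev * dev)
        return outlier

f = OutlierFilter()
stream = [20.1, 20.0, 19.9, 20.2, 35.0, 20.1]   # one anomalous spike
sent = [x for x in stream if f.should_send(x)]
print(sent)    # only the first sample and the spike go upstream
```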
But what exactly does an edge server/gateway look like? So far, there is no clear answer to that.
Fig. 3: Mentor’s customizable IoT gateway. Source: Mentor, a Siemens Business.
“The middle ground is the vaguest of the three levels,” said Freeman. “It will likely be a 64-bit architecture. But it could be nothing more than a ruggedized PC. Dell has been building what are essentially ruggedized server racks packaged in plastic to keep them sealed on the factory floor. It also could be a standard PC that’s used as a gateway.”
What is clear, though, is these will require customization for an individual organization’s needs, and some flexibility because those needs may change. This is part of the reason there is such a scramble for new types of memory—MRAM, phase-change memory (PCM), ReRAM, 3D XPoint, among others. While the basic architecture remains the same—moving data back and forth between memory, the processor and I/O, according to the classic von Neumann compute model—variations within that scheme are beginning to emerge.
The server
The cloud-based server is the third stage of IoT computing, and this is where some of the biggest shifts are occurring today, as well as the largest investments. Unlike a corporate enterprise, which generally works with consistent data types and predictable quantities, the data generated by edge devices is both inconsistent and voluminous. That data may be used as part of the pattern recognition in artificial intelligence, for example, or it can simply be sifted for aberrations that don’t fit within a Gaussian distribution.
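The sift-for-aberrations step can be illustrated with the classic three-sigma rule: characterize a recent window of readings as roughly Gaussian, then flag incoming values that fall more than three standard deviations from its mean. A minimal sketch, with made-up readings:

```python
import statistics

# Sketch of the cloud-side "sift for aberrations" step: fit a reference
# window of readings and flag candidates outside three standard deviations,
# per the Gaussian (three-sigma) rule of thumb.

def find_aberrations(history, candidates, sigmas=3.0):
    mu = statistics.mean(history)
    sd = statistics.stdev(history)
    return [x for x in candidates if abs(x - mu) > sigmas * sd]

history  = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 20.1, 19.7, 20.0, 20.2]
incoming = [20.1, 27.5, 19.9]
print(find_aberrations(history, incoming))   # -> [27.5]
```

Real pipelines use far more robust statistics, but the principle is the same: the cloud keeps the aberrations and discards the bulk of the well-behaved data.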
To deal with this, chipmakers and systems companies have begun devising completely new architectures for logic and throughput, in some cases moving some processing into the network or even into various types of memories.
Nick Ilyadis, vice president of portfolio technology for Marvell’s Strategic Planning Group, pointed to software-defined storage as one of the big changes. He said two trends underway in the cloud are hyperconvergence and hyperscaling.
“With hyperconvergence, you add compute and storage in a vertical configuration,” Ilyadis explained. “With hyperscaling, you add more units to the network and also add more bandwidth. So hyperconvergence is up, hyperscaling is out.”
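The up-versus-out distinction can be expressed as a toy model, sketched below with arbitrary numbers purely for illustration: hyperconvergence grows compute and storage together inside each unit, while hyperscaling adds more identical units to the network.

```python
from dataclasses import dataclass

# Toy model contrasting the two growth strategies Ilyadis describes.
# All figures are arbitrary illustrations.

@dataclass
class Node:
    cpus: int
    storage_tb: int

def hyperconverge(node, factor):
    """Scale up: grow compute and storage together inside each unit."""
    return Node(node.cpus * factor, node.storage_tb * factor)

def hyperscale(nodes, extra):
    """Scale out: add more identical units to the network."""
    return nodes + [Node(nodes[0].cpus, nodes[0].storage_tb) for _ in range(extra)]

rack = [Node(cpus=32, storage_tb=100)]
print(hyperconverge(rack[0], 2))   # up: one bigger converged node
print(len(hyperscale(rack, 3)))    # out: four identical nodes
```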
Making both of those concepts work, particularly with millions of virtual machines, requires changes throughout the cloud architecture, he said. So NVM Express, the protocol for solid-state drive communication, has given way to NVM Express over Fabrics (NVMe-oF). And SSD has given way to network-optimized SSD for in-network memory. “What’s changed is better synchronization with SSD drives and applications,” said Ilyadis.
Memory is one of the key areas for innovation. While DRAM and SRAM remain the go-to technologies, improvements are slowing. Craig Hampel, chief scientist at Rambus, said that historically DRAM scaling provided a 35% improvement in cost per bit, but by 2010 this had dropped to 25%. That has led chipmakers to look at a number of new memory types, including MRAM, phase-change memory, and ReRAM, as well as load-reduced DIMM (LRDIMM), non-volatile DIMM (NVDIMM), storage-class memory DIMM (SCM DIMM), and in the future, caching DIMM.
But it’s not just about the memory type. It’s also the ability to physically locate the memory in different places than it has traditionally been used, which basically shakes up the traditional von Neumann architecture.
“There are three essentials of a memory solution,” said Hampel. “The first is that it needs to satisfy functional needs of memory for block size and cost. Second, it needs a ubiquitous interface. Wherever there is a hole, there are places you can put storage, but for some of the existing memory types the latency and block size are too high. The third thing is that you need software awareness to be able to take advantage of that memory.”
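Hampel’s third essential, software awareness, can be pictured as a placement helper that picks a tier from an application’s latency and block-size needs. The tier names and figures below are illustrative assumptions, not benchmarks.

```python
# Hypothetical sketch of memory "software awareness": a placement helper
# that picks a memory/storage tier from latency and block-size needs.
# Tier names and figures are illustrative assumptions, not benchmarks.

TIERS = [
    # (name, typical latency in ns, natural block size in bytes)
    ("DRAM",     100,     64),
    ("SCM DIMM", 1_000,   256),
    ("NVDIMM",   2_000,   4096),
    ("NVMe SSD", 100_000, 4096),
]

def place(data_bytes, latency_budget_ns):
    """Return the slowest (cheapest) tier that still meets the latency budget."""
    for name, latency, block in reversed(TIERS):
        if latency <= latency_budget_ns:
            return name, block
    raise ValueError("no tier satisfies this latency budget")

print(place(4096, latency_budget_ns=5_000))   # -> ('NVDIMM', 4096)
print(place(64,   latency_budget_ns=200))     # -> ('DRAM', 64)
```

The same idea, in production, lives in memory-tiering software and allocators rather than a ten-line helper, but the decision inputs are the same: block size, cost, and latency.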
Fig. 4: Microsoft Azure cloud-based architecture for health monitoring. Source: Microsoft
Security
Hanging over all of these devices are growing concerns about security. What makes it particularly challenging is that all three levels of devices are likely to interact at some level through the Internet as more of these devices are connected.
Security, like power, needs to be addressed at the architectural level. And the more components involved, the harder it is to secure an electronic device or system.
“Putting everything onto a single chip reduces hacking,” said Steve Mensor, vice president of marketing at Achronix. “At the board level, you can probe signals and deprogram communications. If it’s on-die, it’s much more difficult to get in. This is particularly important in automotive because if it’s self-contained, it’s difficult to get into the chip even when that chip is sending out information.”
Even where customers are willing to pay for it, security will never be complete or last forever. It will require constant updates, and those updates may add vulnerabilities to other devices.
“In mobile devices, there is always an issue of device security,” said Bernd Stamme, vice president of field applications engineering at Kilopass. “The set-top box industry has developed an elaborate system of keys. If you break a key, you can get free movies. But the way they’ve created it, only one person can use that key. The key is established as the root of trust, and then you do updates based upon key provisioning. That establishes identification and base authorization. But it’s also more than just an ID. That also contains manufacturing codes. Updates depend on when it’s manufactured. This is very tricky to manage, and we’re starting to see it in device classes like door openers and window openers.”
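A minimal sketch of the pattern Stamme describes, assuming a per-device key provisioned at manufacturing as the root of trust: the server authenticates the device with a challenge-response, and the key itself never crosses the wire. This is an illustrative pattern using Python’s standard library, not any vendor’s actual scheme.

```python
import hmac
import hashlib
import secrets

# Illustrative challenge-response authentication built on a provisioned
# per-device key (the root of trust). Not any vendor's actual scheme.

DEVICE_ID = "door-opener-0042"

# In practice this key is burned into OTP or secure storage at manufacturing;
# here it is just a variable for illustration.
device_key = secrets.token_bytes(32)
server_db = {DEVICE_ID: device_key}     # server-side key provisioning record

def device_respond(key, challenge):
    """Device proves possession of the key without ever transmitting it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def server_verify(device_id, challenge, response):
    expected = hmac.new(server_db[device_id], challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)                    # fresh nonce per attempt
response = device_respond(device_key, challenge)
print(server_verify(DEVICE_ID, challenge, response))   # True
```

The fresh nonce per attempt is what blocks replay attacks; the constant-time comparison avoids leaking how much of a forged response was correct.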
Security affects every part of a design, from IP that is added to those devices to the source of that IP and the delivery of manufactured chips. The IP itself can be stolen and corrupted if chipmakers are lax in their internal security.
“IP data can be stored anywhere,” said Ranjit Adhikary, vice president of marketing at ClioSoft. “How you manage that and how you link to it is key. You can add encryption code.”
This is all part of the chain of trust that needs to be built around chips, and that includes everything from how IP is stored and managed to how data is shared. “It’s not just IP blocks,” said Adhikary. “It’s best practices, flows and scripts, as well.”
Mentor’s Miller agreed. “Assuming you got the chip you expected, the supply-chain security involved is all good, then you’ve got to deal with the challenge: ‘How does this play a part in a secure system?’ Ultimately, security is more of a process than a product,” he said.
But even with the best practices implemented and security built into devices, in the IoT those devices have to work with other devices. This is one of the issues that came to light last fall with the Mirai distributed denial-of-service attack, which turned unsecured devices into a botnet army.
“Surprisingly, there are a lot of chips being designed with almost no security, or the designers think they’re going to handle it with just software,” said Synopsys’ Lowman. “Over the past year, and probably more in 2017, this has become a much more important topic because of the breaches that we’ve seen. People are only implementing security if they have to, because there is a cost associated with it. What’s driving them to do it is their end customers saying, ‘We have to have this type of security certification,’ or they’ve been breached themselves and want to protect against that in the future. Those are probably the two biggest drivers for security. But you have systems in place today, such as telematics or vehicle tracking, that are still completely unsecured. They’re starting to put some encryption into those via software. But the recent breaches show that’s not good enough.”
Conclusion
The IoT is not a single thing. It is a collection of many different technologies, services, and markets, all connected to the Internet. While the three main levels within the IoT are more or less defined—edge, middle and cloud—there will likely be changes to what fits where over the next few years as the IoT begins to take shape. That will affect designs, cost, and scalability of these systems.
There are many obvious advantages to connected things. A smart sprinkler system can shut off when it’s raining. A connected car can alert a driver to an accident ahead and provide alternative routing. A smart streetlight can adjust to traffic and neighborhood preferences. And a smart machine can send out a message that a part is about to fail before it interrupts an industrial operation.
There also are disadvantages. If everything is connected, data can be stolen, machines can be co-opted, and there is less predictability in how devices will behave as an updated machine interacts with another machine that has not been updated—a problem that every computer user is aware of when they update their operating system and their printer driver no longer works. But what is clear is that the IoT is coming, and no matter what problems exist, all of these issues need to be solved at every connected level. Building an IoT chip is just the first step in what promises to be a huge opportunity and headache for years to come.