Intel: Innovating at the Edge
The Ivey Business Review is a student publication conceived, designed and managed by Honors Business Administration students at the Ivey Business School.
Innovation Since Inception
Founded in 1968 by Gordon Moore and Robert Noyce, Intel quickly made a name for itself as a world leader in computing technology. Just three years after inception, the company produced a chip that compacted the power of a 3,000-cubic-foot computer into a device smaller than a fingernail. This invention made possible the creation of the first personal computer (PC). More than 50 years later, Intel is the dominant manufacturer of central processing units (CPUs), which operate as the proverbial brain of the computer. At its peak, Intel held 82.5 per cent of the CPU market.
The Rise of Fabless Companies
While Intel has long held a leading position among CPU manufacturers, competitors’ improved processing power has eroded this edge. As an integrated device manufacturer, Intel must maintain a technological lead in both the design and the manufacture of CPUs. Aided by semiconductor foundries such as Taiwan Semiconductor Manufacturing Company (TSMC), fabless manufacturing (designing the microchip in-house while outsourcing its fabrication) is becoming increasingly affordable and prominent.
One example of a fabless chip company is Advanced Micro Devices (AMD), which spun off its manufacturing operations into a separate company, GlobalFoundries, in 2009. With the launch of its Ryzen processors, AMD significantly narrowed the performance gap, providing a lower-priced alternative to Intel in the traditional consumer PC segment. Coupled with the delays in Intel’s next-generation manufacturing capabilities, this gives both manufacturers and consumers significant cause to adopt AMD products.
Case in point: HP will adopt AMD processors in up to 30 per cent of its consumer PCs. Similarly, in the data center segment, AMD is set to launch its next-generation EPYC chips in 2019 as a higher-performance option with a lower Total Cost of Ownership (TCO) than Intel’s Xeon series processors. To compete with AMD’s lower TCO, Intel has been offering double-digit percentage discounts on its Xeon processors. Consequently, it is imperative that Intel find new uses for semiconductors in which its manufacturing capabilities provide a competitive advantage.
One of the highest-growth segments in semiconductors is the Internet of Things (IoT), in which Intel competes through its aptly named Internet of Things Group. The term can be traced back to a 1999 internal presentation at Procter & Gamble recommending the use of electronic identification in the company’s supply chain. Today, it refers to a network of devices capable of collecting, processing, and sharing data across that network.
IoT devices are traditionally “dumb” objects that have had basic computing and networking functionality built in, allowing them to communicate with one another. For example, Nest Labs produces a number of smart home devices, such as the Nest Temperature Sensor, which uses basic networking functionality to communicate with a central thermostat to regulate temperature. When all related products and services are considered, IoT is estimated to become a $520-billion industry by 2021, while the number of IoT devices is projected to reach 20 billion by 2020.
Fifth-Generation Networks (5G)—the next generation of mobile internet connectivity—can be differentiated from current 4G networks by their extra bandwidth, made possible by transmission over the super-high-frequency spectrum. In addition to peak speeds nearly 20 times faster than those of 4G, the density of the network allows it to operate with near-zero latency, significantly improving responsiveness and reliability. The amount of data generated and transmitted across networks will inevitably grow with the number of connected devices. Current 4G networks simply do not have the bandwidth to keep up, especially given the congestion caused by communicating IoT devices. As such, the development of widespread 5G networks is a precursor to the widespread adoption of IoT.
Bringing 5G up to scale requires significant capital expenditure and collaboration among multiple stakeholders. Regulators and government entities still need to determine policy on spectrum bands, the frequencies on which the 5G network will operate, as well as security standards. However, given the widespread commercial benefits of the improved technology, telecommunications companies have announced that they plan to begin rolling out their 5G networks as early as this year.
Intel currently has a variety of investments that shape its future avenues for growth, ranging from manufacturing Graphics Processing Units (GPUs) to investing in the development of artificial intelligence (AI) software. GPUs are specialized electronic circuits capable of performing a narrow range of calculations extremely quickly and efficiently. Although they are best known for powering graphically intensive video games, they have myriad commercial applications, one of which is the processing and training of AI algorithms.
Although Intel is a leading manufacturer of CPUs, it does not have the same expertise in developing discrete GPUs. This is an investment outside of Intel’s core competencies and a market that is already dominated by Nvidia and AMD. Similarly, Intel faces fierce competition in the market for AI software. Software development requires significant human capital, a resource that is being increasingly consumed by players such as Google, Amazon, Microsoft, and Facebook. Through their existing software services, these competitors already have access to large datasets that can be used to create robust AI models.
Intel has invested in the IoT market, highlighting it as a key business segment. The IoT Group currently contributes five per cent of total revenue and targets the entire IoT value chain. Intel offers a broad range of hardware, software tools, and ecosystem programs, which again extends past its core competency of creating robust microprocessors. Under this model, Intel faces stiffer competition than if it focused solely on delivering IoT chips.
A critical feature of IoT chips is their ability to perform edge computing: a decentralized approach that processes data at its source rather than transferring it to a central cloud server for computation. The major microprocessor manufacturers appear to have neglected to develop specialized chips, choosing instead to repurpose mobile or PC chips for IoT use. However, these repurposed chips are often inefficient for edge workloads. Consequently, Intel can derive value by using its integrated expertise to design and build IoT-specific chips.
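The edge computing pattern described above can be sketched in a few lines: a node reduces raw sensor samples to a compact summary locally and forwards only that summary (plus any anomalies) upstream, instead of streaming every sample to the cloud. All names and thresholds here are illustrative assumptions, not Intel product behavior.

```python
# Minimal sketch of edge computing: process raw readings at the
# source, transmit only a summary and any anomalous samples.
from statistics import mean

def process_at_edge(readings, alert_threshold):
    """Reduce a window of raw samples to a compact summary plus alerts."""
    summary = {
        "count": len(readings),
        "mean": mean(readings),
        "max": max(readings),
    }
    # Only anomalous samples are flagged for immediate transmission.
    alerts = [r for r in readings if r > alert_threshold]
    return summary, alerts

# A window of raw temperature samples from a hypothetical sensor.
window = [21.0, 21.4, 20.9, 35.2, 21.1]
summary, alerts = process_at_edge(window, alert_threshold=30.0)
print(summary)  # compact payload sent to the cloud
print(alerts)   # anomalies reported immediately
```

The bandwidth saving is the point: five raw samples collapse into one three-field summary, which is why edge-capable chips matter for congested networks.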
Clearing the Fog
Fog computing is an extension of the concept of edge computing, in which computing resources are shared among different smart objects in the network. This allows for better allocation of computing power, increasing both processing speed and capacity. Intel has already begun collaborating with other IoT players such as Microsoft and Cisco through the OpenFog Consortium, which funds research into fog computing.
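The resource-sharing idea behind fog computing can be illustrated with a toy dispatcher: a task is sent to whichever nearby node currently has spare capacity, falling back to the cloud only when no local headroom exists. The node names, capacities, and loads below are illustrative assumptions.

```python
# Minimal sketch of fog computing: smart objects pool spare compute,
# so tasks run on the least-loaded local node instead of a distant cloud.

def pick_fog_node(nodes, task_cost):
    """Return the least-loaded node with enough headroom, else None."""
    candidates = [n for n in nodes if n["capacity"] - n["load"] >= task_cost]
    if not candidates:
        return None  # no local headroom: fall back to the cloud
    return min(candidates, key=lambda n: n["load"])

nodes = [
    {"name": "gateway",    "capacity": 10, "load": 9},
    {"name": "camera",     "capacity": 4,  "load": 1},
    {"name": "sensor-hub", "capacity": 6,  "load": 2},
]
chosen = pick_fog_node(nodes, task_cost=3)
print(chosen["name"])  # → camera
```

Real fog schedulers weigh latency and energy as well as load, which is exactly why low-TCO, power-efficient nodes make the whole network more valuable.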
To position itself for success, Intel should begin revamping its chips. Intel’s current product offering in the IoT segment is the Xeon D-2100 processor series. Released in February 2018, the Xeon D series is based on the Skylake microarchitecture already found in a variety of PC and server chips. To gain a foothold in the edge and fog computing markets, Intel must deliver a low-TCO chip, and that requires a granular redesign. Microarchitecture plays an important role in the energy efficiency and processing power of chips; recycling Skylake in edge computing chips is therefore not a meaningful strategic step forward. To establish itself as a leader in IoT technology, Intel must commit to reducing the power usage of its chips: because fog computing derives value from the shared use of computing power, each node needs a low TCO. Designing a new microarchitecture and chip takes substantial investment but is necessary to create a leading position in the fog computing market.
If Intel is able to successfully develop a market-ready chip, it stands to benefit from several use cases of fog computing in numerous industries.
Mining
A study done by the World Economic Forum in 2017 found that digital transformation in mining, including IoT and connected systems, could provide more than $425 billion of value to the mining industry by 2025. Specific applications of IoT in mining include predictive equipment maintenance, remote monitoring and control of equipment, and health and safety analytics. Devices in this industry can generate large amounts of data that need to be processed in the cloud. However, because most mines operate in harsh conditions hundreds of feet underground, connectivity to the cloud becomes an issue. By using fog computing nodes placed locally within the mine, most data can be processed on-site to drive real-time analytics.
Oil and Gas Pipelines
By implementing IoT, pipelines can move from a reactive approach to a predictive one. For example, sensors placed along the pipelines provide real-time metrics such as flow rate and pressure. Pipeline monitors can then watch for signs of major damage and act preventatively. Because data is processed locally, reliability improves drastically. In an industry plagued with health and safety concerns, these analytics could greatly improve safety and prevention.
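The predictive monitoring described above reduces to a simple local rule a fog node could run beside the pipeline: watch successive pressure readings and flag any sudden drop, a common leak signature, without waiting on a cloud round trip. The readings and drop threshold are illustrative assumptions.

```python
# Illustrative sketch of local pipeline monitoring on a fog node:
# flag sample indices where pressure fell sharply between readings.

def detect_pressure_drops(pressures_psi, max_drop_psi):
    """Return indices where pressure fell by more than max_drop_psi."""
    alerts = []
    for i in range(1, len(pressures_psi)):
        if pressures_psi[i - 1] - pressures_psi[i] > max_drop_psi:
            alerts.append(i)
    return alerts

readings = [800, 799, 801, 760, 758]  # hypothetical psi samples
print(detect_pressure_drops(readings, max_drop_psi=20))  # → [3]
```

Because the check runs on-site, an alert fires within one sampling interval even if the backhaul link to the cloud is slow or down.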
Healthcare
Modern hospitals use multitudes of connected devices to monitor, analyze, and treat patients. In an OpenFog Consortium report, the organization outlines the potential of using fog computing in patient monitoring. One commonly used device is the patient-controlled analgesia (PCA) pump, which allows patients to self-administer pain medication. Accidental PCA overdoses kill up to one thousand U.S. patients a year, an issue that could be addressed using fog computing. Integrating fog nodes across medical devices such as blood oxygen and respiratory rate monitors with PCA monitoring would allow the system to process patient data and determine whether administering the next patient-initiated dose would be dangerous. With the low latency of local processing, real-time insights into patients’ health could be transmitted to medical providers, and abnormal events could be reported quickly.
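The fog-node safety check described in the OpenFog scenario amounts to a gating function: vitals from nearby monitors decide whether the next patient-initiated dose is permitted. The cutoffs below are illustrative assumptions for the sketch, not clinical values.

```python
# Hedged sketch of a fog-node PCA safety gate: vitals from co-located
# monitors must look safe before the pump honors a dose request.

def authorize_pca_dose(spo2_percent, resp_rate_per_min):
    """Allow a dose only if oxygen saturation and respiration look safe."""
    if spo2_percent < 90:        # hypothetical hypoxia cutoff
        return False
    if resp_rate_per_min < 8:    # hypothetical respiratory-depression cutoff
        return False
    return True

print(authorize_pca_dose(97, 14))  # healthy vitals: dose permitted
print(authorize_pca_dose(85, 14))  # low SpO2: dose withheld
```

Running this check on a local fog node rather than in the cloud is what makes the millisecond-scale response, and hence the safety guarantee, possible.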
An Integrated Path Forward
Mining, pipelines, and healthcare are only three IoT use cases that could benefit from fog computing. Given the immense growth expected of the IoT market, Intel must secure a place in the value chain. To make this a reality, Intel should develop a line of original IoT chips that support edge and, by extension, fog computing. While its existing business segments face increasing competition, the opportunity in the nascent IoT industry can revitalize Intel’s growth.