In contrast to traditional cloud architectures, fog computing and edge computing are network and system architectures that gather, analyze and process data from assets deployed at the very edge of the network. Fog computing and edge computing share similar goals, including reducing network latency, minimizing the amount of data sent to the cloud for processing, and optimizing system response time, particularly in remote mission-critical applications. Indeed, some consider the two architectures synonymous, but there are fundamental differences between them.
In both types of architecture, the data itself is generated from the exact same source: a physical asset or ‘thing’ of some kind, whether that is a sensor, a pump or a motor. These devices perform a real-world task such as detecting information about the world around them, pumping water or switching electrical circuits. Even though both fog computing and edge computing involve moving intelligence and processing capacity closer to where the data originates, the most fundamental difference between them is precisely where that intelligence and computing power is located.
- Fog computing drives intelligence down to the local area network (LAN) level of the network architecture, processing data in either a fog node or an IoT gateway.
- Edge computing, meanwhile, drives the intelligence, processing power and communication operations of an edge gateway or appliance directly into the devices themselves, such as programmable automation controllers (PACs).
Fog computing therefore involves multiple layers of complexity and data conversion. It relies on numerous links in a communication chain to move data from the physical world of the ‘thing’ into the digital world. Edge computing offers a simplified version of this communication chain, thereby reducing possible points of failure.
Fog computing can be used effectively in combination with edge and cloud computing. Key real-world use cases of fog computing include smart electrical grids and the Internet of Things, in particular autonomous vehicles and manufacturing in the Industrial Internet of Things (IIoT).
Graeme Wright, CTO for Manufacturing, Utilities, and Services at Fujitsu UK, described how this works to improve efficiency in an interview with TechRadar.
“Edge computing may be used to control the device that is being monitored by a sensor, and only send data back when something changes,” says Wright. “This could then be complemented by fog computing, to alert other sensors or devices of the status change, and take appropriate action.” Cloud computing can then be used to run analytics on the complete system, alerting staff to any necessary maintenance issues. “This setup can not only provide real-time analysis of the data, but also lower data storage, and more importantly improve efficiency,” notes Wright.
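The division of labour Wright describes can be sketched in a few lines of Python: an edge component reports a reading only when it changes meaningfully, and a fog node fans that status change out to other devices on the LAN while logging events for later cloud-side analytics. All class names, thresholds and identifiers below are hypothetical illustrations, not a real edge or fog API.

```python
class EdgeSensor:
    """Edge layer: monitor a device and report only when something changes."""

    def __init__(self, threshold=1.0):
        self.threshold = threshold      # hypothetical change threshold
        self.last_reported = None

    def read(self, value):
        """Return the value if it changed enough to report upstream, else None."""
        if self.last_reported is None or abs(value - self.last_reported) >= self.threshold:
            self.last_reported = value
            return value
        return None  # no meaningful change: nothing is sent over the network


class FogNode:
    """Fog layer: alert peer devices of a status change; log events for the cloud."""

    def __init__(self):
        self.subscribers = []  # callbacks for other local sensors/devices
        self.event_log = []    # batched later for cloud-side analytics

    def publish(self, sensor_id, value):
        self.event_log.append((sensor_id, value))
        for callback in self.subscribers:
            callback(sensor_id, value)


fog = FogNode()
alerts = []
fog.subscribers.append(lambda sid, v: alerts.append((sid, v)))

sensor = EdgeSensor(threshold=1.0)
for reading in [20.0, 20.2, 20.4, 22.1, 22.3]:
    changed = sensor.read(reading)
    if changed is not None:          # only status changes leave the edge
        fog.publish("pump-temp", changed)

print(alerts)  # [('pump-temp', 20.0), ('pump-temp', 22.1)]
```

Note how only two of the five readings cross the threshold and travel upstream; the rest stay at the edge, which is the data-reduction effect Wright highlights.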
Cisco is commonly associated with the term fog computing, having originally registered the name “Cisco Fog Computing”. Various industry bodies have since arisen to help define fog computing and encourage its use, including the OpenFog Consortium. The group was founded in November 2015 by ARM, Cisco, Dell, Intel, Microsoft and Princeton University to “accelerate the adoption of fog computing and address bandwidth, latency and communications challenges associated with IoT, 5G and AI applications”. The Consortium will hold the Fog World Congress in San Francisco from October 1-3, 2018.
One of the key values disseminated by the OpenFog Consortium is that “the immersive fog can address many challenges that Cloud alone cannot effectively address”, including the processing of real-world data, supporting time-critical local control, connecting and protecting the many types of resource-constrained devices on the market, and overcoming network bandwidth and availability issues. The Consortium predicts that “over time, cloud and fog will converge into unified end-to-end platforms offering integrated services that combine resources in the clouds, the fogs and the things”.