Technology always needs managing. But system and machine management in the modern era means embracing a new wave of Internet of Things (IoT) and smart machine deployment, especially when and where those devices fall into the category that we now formally define as edge computing.
So then, what device management decisions do we need to make?
Software application developers working in this space will want tools that a) manage and control IoT devices in rapid and highly functional ways, but also b) offer an automation advantage – where possible – by pushing some of those tasks towards the autonomic backbone that we are all now developing.
In an effort to assess where the IT industry is at with this challenge, we spoke to a couple of organisations with a vested interest in this space. The central question we are posing here is: what issues come to the fore when we look at being able to test, validate and ultimately manage edge IoT device software?
“Being able to test and validate software on the exact hardware selected for deployment is one of the critical innovations from modern software development,” said Jason Knight, chief strategy officer, OctoML. Knight’s company is known for its work that hopes to make Artificial Intelligence (AI) more sustainable through efficient model execution and automation to scale services and reduce engineering burden.
Machine learning hardware and libraries often vary widely between development, testing and deployment environments, Knight explains, which makes ensuring ML-powered applications hit their performance and accuracy targets significantly more complex than typical application logic. That, he says, is why it's so critical to have reliable edge device management, so that developers can test, debug and validate their machine learning applications on actual hardware.
“This becomes even more important when deploying on specialised targets such as embedded hardware and machine learning accelerators. Bringing non-standard hardware into easier test/deployment loops, and current advancements in machine learning acceleration, are key ingredients for lower complexity in these traditionally complex domains,” added OctoML’s Knight.
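Knight's point about validating on the exact deployment hardware can be illustrated with a minimal sketch. Everything here is hypothetical: the toy "model" and the two backends stand in for a full-precision development environment and a reduced-precision embedded target or ML accelerator, and the tolerance check is a simplified version of the accuracy validation he describes.

```python
import math

# Hypothetical stand-ins for two inference backends: a float reference
# implementation (development workstation) and a reduced-precision version
# approximating an embedded target or ML accelerator.
def reference_inference(x):
    return [math.tanh(v) * 0.5 + 0.5 for v in x]  # toy "model"

def device_inference(x):
    # Simulate the target's lower precision by rounding activations.
    return [round(math.tanh(v) * 0.5 + 0.5, 3) for v in x]

def validate_on_target(inputs, abs_tol=1e-2):
    """Compare device outputs against the reference within a tolerance."""
    ref = reference_inference(inputs)
    dev = device_inference(inputs)
    worst = max(abs(r - d) for r, d in zip(ref, dev))
    return worst <= abs_tol, worst

ok, worst = validate_on_target([-2.0, -0.5, 0.0, 0.5, 2.0])
print(f"validation passed: {ok} (max abs error {worst:.4f})")
```

In a real pipeline the "device" side would run on the physical hardware and report results back, which is exactly why the device management loop matters: without it, this comparison never happens before deployment.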
Red Hat(s), plural
Red Hat wanted to provide more than one opinion on this subject, so far be it from us to hold the open source platform purists back. According to Erica Langhi, senior solution architect, Red Hat… interconnectivity and interoperability across sensors and edge components are key factors of a modern and reliable edge computing architecture.
“As such, open APIs and open source are a critical foundation as they provide the transparency and flexibility to help make sure components interact seamlessly with each other,” she said.
Langhi reminds us that there are some big factors at play here and that we need to think in terms of platform and operating system level technologies if we are to move towards a point of competent edge and IoT device management.
“Key considerations for edge architectures include having a lightweight container orchestration platform that extends to edge sites and an operating system that provides consistency between the datacentre and the edge nodes… and using lightweight patterns like serverless, to allow developers to extend application deployment to the network edge,” said Langhi.
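The "consistency between the datacentre and the edge nodes" that Langhi mentions can be sketched as a simple drift check. This is an illustrative toy, not any real fleet-management API: the node names, image labels and runtime versions below are all invented for the example.

```python
# Hypothetical baseline: the image and runtime the datacentre declares
# as the source of truth for the whole fleet.
DATACENTRE_BASELINE = {"os_image": "edge-os-9.3", "runtime": "cri-o-1.28"}

# Hypothetical reports from edge nodes (in practice, gathered by agents).
edge_nodes = {
    "store-042": {"os_image": "edge-os-9.3", "runtime": "cri-o-1.28"},
    "store-117": {"os_image": "edge-os-9.1", "runtime": "cri-o-1.26"},
}

def drifted_nodes(baseline, nodes):
    """Return names of nodes whose reported config differs from baseline."""
    return sorted(
        name for name, cfg in nodes.items()
        if any(cfg.get(key) != value for key, value in baseline.items())
    )

print(drifted_nodes(DATACENTRE_BASELINE, edge_nodes))  # → ['store-117']
```

A real orchestration platform does far more, of course, but the core idea is the same: declare one desired state centrally, then detect and reconcile drift at every edge site.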
Her words are echoed by her colleague Salim Khodri, edge go-to-market specialist for EMEA at Red Hat. He says that for deployment scenarios like intelligent agriculture systems, facial recognition robots or smart parking systems, addressing near real-time needs necessitates moving AI inferencing from the cloud to the edge, in order to benefit from edge computing's low latency and network bandwidth savings.
“While edge AI has very interesting potential there are some special challenges in this space. Running AI applications at the edge will require in-depth controls for security and scalability to give peace of mind that the insights you collected for data-driven decision-making are always available and protected,” said Khodri.
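The latency argument behind Khodri's point can be made with back-of-envelope arithmetic. The numbers below are illustrative assumptions, not measurements: a hypothetical 80 ms WAN round trip, a 10 Mbps uplink and toy inference times.

```python
# Illustrative comparison: sending a frame to a cloud model versus
# running a (possibly slower) model locally at the edge.
def cloud_latency_ms(payload_kb, uplink_mbps=10, cloud_rtt_ms=80, infer_ms=15):
    # payload_kb * 8 kilobits divided by uplink_mbps megabits/sec gives ms.
    transfer_ms = payload_kb * 8 / uplink_mbps
    return cloud_rtt_ms + transfer_ms + infer_ms

def edge_latency_ms(infer_ms=25):
    # Slower local hardware, but no WAN round trip and no upload.
    return infer_ms

# A 100 KB camera frame: 80 + 80 + 15 = 175 ms via the cloud vs 25 ms locally.
print(cloud_latency_ms(100), edge_latency_ms())
```

With these assumed figures the cloud path costs 175 ms per frame against 25 ms at the edge, and every frame avoided also saves uplink bandwidth, which is the second half of the savings Khodri describes.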
Hybrid, centralised management
So, he suggests, we can expect that when considering the move toward edge AI, many organisations will be looking for a hybrid cloud platform with centralised management and security that can help to build, run, manage and scale AI deployments across thousands of servers at the edge. Yes, okay, that does sound like Red Hat.
“This [above technology proposition] supports the ambition to more easily bring AI to networks of retail stores, warehouses, hospitals, factory floors etc. as well as help to simplify operations. Another reason to seek centralised management at the edge relates to cost reduction, which in turn makes AI more accessible and practical at all locations,” explained Khodri.
Are we in control of edge device management then? Yes, but – and it’s a big but – we are in control from more than one driving seat and from more than one perspective. The edge device best practice handbook doesn’t have one set of all-terrain road tyres yet, let alone a map or an eco-efficient carbon-neutral engine.
Rather like cloud computing itself, which arrived without the security provisioning, observability tools and orchestration controls that we now consider to be de facto standards, edge computing has perhaps burgeoned without a commensurate agreement on device management. Let’s do that next then, like really, right?
Image credit: Adrian Bridgwater