Are AI tools like ChatGPT friend or foe for cloud developers?

The cloud AI train has left the station, with Microsoft recently announcing a reported $10 billion investment in ChatGPT creator OpenAI and Google now apparently working to build a ChatGPT rival. But what does deeper artificial intelligence (AI) integration in applications and the cloud mean for developers? And is it necessarily a good thing?

The answer depends on who you talk to.

AI has been part of cloud infrastructure for a while now. Sid Nag, vice president in the Technology and Service Provider group at Gartner, told Silverlinings the technology has primarily been used for cloud provisioning and orchestration, especially in data centers. For instance, AI has been used to help set command parameters to ensure architects don’t make critical mistakes that could accidentally bring an entire data center down.   

But he noted there seem to be two new ways AI is making its way into the cloud. First, with the advent of query-and-response engines like ChatGPT, AI can be used not only for things like search but also to write code that can be embedded in applications.

“From a devops perspective, that is a pretty powerful capability,” he said. Until now, for instance, writing search queries and commands meant a developer had to know a specific programming language and the exact syntax of the relevant command. Now, with ChatGPT, “I can literally get that in a jiffy.”

Just ask ChatGPT

Kenneth Wenger, senior director of research and innovation at CoreAVI and CTO of Squint AI, made a similar observation in a recent interview with Fierce Electronics. Wenger said a software engineer told him “that he no longer has to remember syntax” because he can just ask ChatGPT.  
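 
To make that concrete, here is a minimal sketch (illustrative, not from the article) of what “just asking” looks like in practice: a developer hands the model a plain-English request and gets back a query to embed in an application. It assumes the OpenAI Python client; the model name, prompt and table layout are assumptions made for the example.

```python
# A minimal sketch (illustrative, not from the article) of asking a ChatGPT-style
# model to draft a query instead of recalling the syntax by hand.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write a SQL query that returns the ten most recent orders "
    "from a table named orders, sorted by created_at."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whichever model is available
    messages=[{"role": "user", "content": prompt}],
)

generated_sql = response.choices[0].message.content
print(generated_sql)  # a human should still review the output before shipping it
```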

The other area in which Nag sees AI being used more extensively in the cloud is the infrastructure-as-a-service substrate. Serverless offerings like AWS Lambda already allocate data center resources automatically, letting developers run code without issuing commands to provision compute. But Nag said deeper AI integration could make that resource allocation process even quicker.

“So, if a developer has written that [code] before, another developer writing an application with a similar intent can then call that serverless functions capability and that serverless functions capability can then say, ‘Oh I’ve seen this code base before. I know how much to allocate right away, I don’t have to go and redo all my heavy lifting for this guy,’” Nag explained. 
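 
For readers less familiar with the serverless model Nag is describing, the sketch below (illustrative, not from the article) shows how little infrastructure code a developer actually writes for an AWS Lambda function: just a handler, with the platform deciding when to run it and what compute to give it. The event fields and return shape are assumptions made for the example.

```python
# A minimal sketch (illustrative, not from the article) of an AWS Lambda handler.
# The developer writes only this function; Lambda decides when to run it and
# allocates the underlying compute.
import json

def handler(event, context):
    """Entry point Lambda invokes; no servers are provisioned by the developer."""
    order_id = event.get("order_id", "unknown")
    # ... application logic would go here ...
    return {
        "statusCode": 200,
        "body": json.dumps({"order_id": order_id, "status": "processed"}),
    }
```

Today that allocation is largely a per-function memory setting the developer picks; in the AI-assisted version Nag describes, the platform would infer it from code bases it has already seen.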

The dark side of AI tools 

But AI isn’t all upside. For instance, Nag said using a tool like ChatGPT for code generation raises the possibility that malicious code could end up incorporated into applications. There are also ethics questions about whether the data used to train AI models is proprietary (i.e., belongs to someone else) and whether developers are actually “allowed to use this information,” he said.

Additional ethics questions arise when AI is used to enable people, entities or applications that researchers and developers deem unethical. That issue reached a boiling point in 2018, when Google Cloud chief AI scientist Fei-Fei Li exited the company after a dispute over whether its technology should be used to help the U.S. military analyze drone footage.

As for worries that ChatGPT and other AI tools might lead to even more layoffs in the tech sector, that doesn't necessarily seem to be the case.

Nag’s takeaway: “Cloud providers should be mindful that ChatGPT is just one example of many innovations in AI and should avoid being seduced by the hype surrounding it given the technology limitations and shortcomings. It’s an interesting technology but it should be used with caution.” 

