Enterprise software developers prepare for generative AI’s ‘productivity revolution’ – SiliconANGLE
Daniel Saroff tells of a client who is going through a modernization project that involves rewriting applications created in Pick, an operating system and programming language built for mainframes in the 1960s, to run on modern infrastructure. The company found that the ChatGPT generative artificial intelligence model knew enough about the now-obscure Pick environment to aid significantly in the process.
“They were using generative AI to find errors, write documentation and even migrate to Visual Basic,” said Saroff, the group vice president of consulting and research at International Data Corp.
Joe Reeve, an engineer at behavior-tracking software firm Amplitude Inc., noticed earlier this year that customers were submitting code written in the company’s proprietary query language that had come from an engine trained on the Generative Pre-trained Transformer-4 multimodal large language model.
“It turns out that GPT-4 had already learned from the internet how to use our tools,” he said. “It’s already out there helping customers build things.”
For all the gee-whiz uses people are finding for OpenAI LP’s ChatGPT and other generative AI models that are proliferating across the internet, nothing may ultimately be more disruptive than their ability to generate code. In the process of ingesting the content of billions of web pages, LLMs suck up a fair amount of programming instructions and related documentation and spit it out in ways that amaze even veteran developers.
Many have marveled in online forums and on social media at the quality of code generative AI engines produce. While not yet very effective at innovating new solutions, GPT-powered assistants such as Microsoft Corp.’s Copilot and Amazon Web Services Inc.’s CodeWhisperer have been widely adopted for their facility at automating mundane programming tasks.
The tools are proving to be even more valuable in such areas as software testing because of their ability to churn out a dozen different test scenarios in a few seconds. An IDC survey last spring found that developers identified software quality testing and security and vulnerability testing as the two greatest benefits of generative AI. Writing code came in third.
GPT models have also shown promise in translating software written in old languages such as Cobol to more modern dialects that have a broad base of developers available to support them.
This and other trends in generative AI will be explored Tuesday and Wednesday, Oct. 24-25, at SiliconANGLE and theCUBE’s free live virtual editorial event, Supercloud 4, featuring a big lineup of prominent executives, analysts and other experts.
By all accounts, the impact of generative AI on software development will be profound. A recently published survey by GBH Insights LLC, which does business as GBK Collective, found that 78% of companies expect to use AI for software development within the next three to five years.
Based on a survey of 2,000 information technology professionals, Freshworks Inc. estimated that U.S. companies could save over $15,000 per IT employee each year by using AI to automate repetitive tasks. Gartner Inc. forecasts that more than half of software engineering leader role descriptions will explicitly require oversight of generative AI by 2025.
An IDC study in May found that nearly 40% of IT executives said generative AI “will allow us to create much more innovative software.” Large enterprises said they expect the technology to help them overcome chronic skills shortages, while smaller firms expect it to reduce their spending on software-as-a-service applications that they will be able to build on their own.
“We are seeing the onset of a productivity revolution,” said Prasad Ramakrishnan, Freshworks’ chief information officer.
“I think this is one of the most exciting things that will happen in our careers,” said Graeme Thompson, CIO at Informatica Inc. Noting that few people had even heard of generative AI a year ago, Thompson marveled at the speed with which it has captivated IT professionals. “Much of what we’ve seen has been incremental,” he said. “This is sudden.”
Amplitude has been using Microsoft Copilot for several months and has seen a dramatic change in productivity, said CIO Chetna Mahajan. “The head of engineering told me we’re reducing time to value by 20% to 25%.”
As models steadily improve, they will be able to take on more sophisticated tasks in areas such as testing, debugging and identifying security flaws. That could fundamentally change the role of software developers and hasten the democratization of basic development tasks that has been driven by low-code and no-code tools.
“In the old world you had people with ideas and developers who bring those ideas to life,” said Sumeet Arora, chief development officer at business intelligence software provider ThoughtSpot Inc. “We are moving toward a place where we go from ideas to answers. This will dramatically lower the barrier to everybody being able to use the technology without the need for complex platforms.”
The long-term impact could be that IT will no longer be a cost organization, said Amplitude’s Mahajan. “This is a perfect opportunity to drive revenue and improve customer engagement and retention. It’s an exponentially better opportunity for IT to be seen as part of the business.”
The software development profession has been at the forefront of automation for years, and using tools to speed up and simplify coding is nothing new. Auto-completion and spell-checking algorithms for programmers were first developed in the 1950s. Frameworks, which are reusable, pre-written sets of code that lay down a foundation for application development, have been used for over a decade. Stack Overflow, which enables developers to share code, was established in 2008. Microsoft introduced Copilot more than two years ago. As Ramakrishnan said, “This shift is not new.”
What is new is that the scope of the changes generative AI is likely to trigger will have ripple effects that change the way software is built and how users interact with it. “It’s going to replace the frameworks that we are using right now,” said Olivier Gaudin, founder and co-CEO of SonarSource SA, a developer of code quality tools that does business as Sonar.
A more sweeping change may be in the interface software presents to users. For decades, the way we interact with applications has been defined by menus, wizards and other scaffolding that hides underlying complexity. Developers had to anticipate how the software would be used when building interfaces. Much of that gingerbread won’t be needed in the future.
“I think this will remove the need for a lot of code entirely,” said Informatica’s Thompson. “Why have a developer create a Power BI dashboard when the chief marketing officer could just have a conversation with the data? There’s no need for the cost and time of building a user interface.”
That moves software development in general up the value chain. “It’s going to change the skill sets that are required for developers,” said Amplitude’s Mahajan. “IT will be more of a center of excellence that makes sure the data is of the high quality needed to feed the LLMs. We won’t be in the business of creating dashboards.”
Amplitude engineer Reeve agreed. “It allows us to keep our brains focused on the difficult problems without copying, pasting and tweaking tests,” he said. “We can stay in the abstract and do the mental math.”
The impact AI will have on software development and developers in the long term is currently mostly conjecture. In interviews with people who are active in the field, SiliconANGLE focused on a few key questions that are likely to concern IT leaders over the next three to five years.
How will software development change?
Writing code will undoubtedly be a smaller part of the job, while data architecture and engineering will grow in importance. The quality of LLMs correlates with the quality of the data they’re given, meaning that carefully curated data directly impacts the quality of code produced.
“Developers will create data models that enable users to ask simple questions,” said Informatica’s Thompson. “You’re not going to have armies of people creating dashboards; you’ll have armies making sure data models are trustworthy.”
Prompt engineering will become a necessary competency. That’s the art and science of creating clearly worded commands for the generative AI engine with the proper specificity and context, then iteratively refining them to improve results while minimizing the risk of harmful or biased outputs.
At software asset management firm Snow Software Inc., generating application programming interface and schema tests has been reduced from hours and days to literally minutes, said Chief Architect Jesse Stockall. The secret is good prompts. “It’s easy to treat these GPTs as fancy search engines, but you’re not getting the full power if that’s all you’re doing,” he said.
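As a rough illustration of what “proper specificity and context” can mean in practice, the sketch below contrasts a vague request with a structured prompt for generating API tests. The endpoint, schema and scenario names are invented for the example; they are not Snow Software’s actual prompts.

```python
# Illustrative sketch: a vague prompt vs. one that gives the model a role,
# concrete context and explicit constraints. All names are hypothetical.

VAGUE_PROMPT = "Write tests for my API."

def build_test_prompt(endpoint: str, schema: dict, cases: list) -> str:
    """Assemble a prompt with role, context and constraints for an LLM."""
    lines = [
        "You are generating pytest tests for a JSON HTTP API.",
        f"Endpoint under test: {endpoint}",
        "Response schema (field name -> type):",
    ]
    lines += [f"  - {field}: {ftype}" for field, ftype in schema.items()]
    lines.append("Cover these scenarios, one test function each:")
    lines += [f"  {i}. {case}" for i, case in enumerate(cases, 1)]
    lines.append("Return only runnable Python code, no commentary.")
    return "\n".join(lines)

prompt = build_test_prompt(
    "GET /v1/users/{id}",
    {"id": "int", "email": "str", "created_at": "ISO-8601 string"},
    ["valid id returns 200", "unknown id returns 404", "malformed id returns 400"],
)
print(prompt)
```

The structured version tells the model what framework to target, what the data looks like and which cases to cover, which is the kind of specificity that separates a prompt from a search query.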
“Software engineer jobs are going to be at a higher level of abstraction, where instead of managing, reading and writing code all day, we’re going to be managing prompts,” said Amplitude’s Reeve.
Prashant Kelker, partner for digital solutions at technology research and advisory firm Information Services Group Inc., agreed. “We are moving away from a world that is about how to write code and toward a world of owning the AI that continuously optimizes,” he said.
Developing secure software, a task more than half of developers find burdensome, is likely to become a higher priority thanks to improvements in generative AI scanning. “The things we’ve seen some of the software capable of doing in debugging is really incredible,” said Matt Riley, general manager of enterprise search at Elasticsearch Global BV. “Spotting memory leaks and buffer overflows can have serious implications for how security is done.”
The need for less code inspection means a greater focus on testing, said ISG’s Kelker. Automatically generating independent verification and validation test cases will enable red teams to test applications more effectively before release and “tremendously improve the quality of rollouts,” he said.
“Writing code is the simple process,” said Sherard Griffin, senior director of global software engineering at IBM Corp. subsidiary Red Hat. “I’m more interested in how we can analyze complex systems to understand why an error occurs. There’s going to be more opportunity to work on those back-end processes.”
User interface design may become less important over time, but “a lot of the things in the back end won’t disappear anytime soon,” said Devavrat Shah, an MIT professor and co-founder of Ikigai Labs Inc., a company that helps organizations operationalize AI models.
No one expects machines to entirely obviate the need for developers to work with code. Generative models are good at solving problems based on what they already know, but not at inventing new solutions. “At the end of the day you can never take the human out,” said Freshworks’ Ramakrishnan. “It takes engineering to a higher level, but you will never see engineers replaced.”
Who benefits the most?
Generative AI is arguably a tide that lifts all boats in the development world, but some may benefit more than others. Whether those will be junior developers or code ninjas is a matter of debate.
Informatica’s Thompson sees efficiency improvements all around, with a slight advantage to less experienced developers. “The difference between a great developer and an average developer is efficiency,” he said. “Generative AI makes less experienced developers faster and more confident in the code they’ve created, almost as if it’s being validated by someone else. Less experienced developers will become good developers faster, and you’ll be able to give them bigger assignments because they can test more effectively.”
Sonar’s Gaudin sees the bigger advantages accruing to more experienced coders. “From a productivity point of view, I have no doubt it will be a great productivity tool for senior developers; I have doubts it will be a great tool for less experienced developers,” he said.
Gaudin believes software engineers who lack the skills to analyze code in depth are more likely to make reckless decisions with generative AI output than their more experienced colleagues. “If you choose to use code as is, you’re going to have a car crash maybe two times out of three,” he said. “We might push stuff we never should have pushed into production.”
Ikigai Labs’ Shah sees junior developers getting a slightly bigger boost but said nothing compensates for the superior experience of a senior coder. “Some of the quality will be normalized so not-so-great programmers will be at the level of good programmers,” he said, “but the top remains at the top.”
Generative models populated with an organization’s existing code base will make inexperienced developers productive more quickly and benefit their more senior colleagues upstream, said Brandon Jung, vice president of ecosystem at Tabnine Ltd. “The advantage to the code ninjas is that they will get less code they don’t understand,” he said.
Amplitude’s Reeve sees less experienced developers being able to accelerate the learning curve, but with the tradeoff that lack of mastery of the fundamentals could bite them later. “You’re going to get a lot more people who can create a lot more value, but then they’re more likely to be stranded if the tide rises, and they have less attachment to the fundamentals because they didn’t have to learn them the hard way,” he said.
Closing the skills gap
By various estimates, there is a shortage of between 1 million and 1.5 million software developers in the U.S. alone. The U.S. Bureau of Labor Statistics expects demand for software developers, quality assurance analysts and testers to grow 25% over the next decade, much faster than the average for all occupations and far beyond the number colleges and universities can supply.
When the telecommunications industry confronted a similar quandary of a shortage of telephone operators 75 years ago, it responded with direct dialing, effectively making everyone an operator. Could the same happen in IT?
Most experts don’t think so. Although Gartner has predicted that 80% of technology products and services will be built by non-technology professionals next year, the range of applications that average business users can tackle will likely remain limited to personal and group productivity functions behind the firewall, they said.
“Gen AI foundation models are trained to be generic,” said Red Hat’s Griffin. “Gen AI may get citizen developers across some baseline, but we’ll see that people can’t drive core business value with generalized models.”
The technology will absolutely ease the building of software, and more unique software, said Tabnine’s Jung. That said, citizen developers are less likely to understand the nuances of testing and debugging. “Developers are still required to understand the code and act as a double check,” he said. “Building with non-compliant code is a risk, as is the introduction of bugs, especially if the code is not part of a robust system development lifecycle.”
Elasticsearch’s Riley also sees more software development being done inside business units as the barriers to streamlining back-office processes fall. “Basically, every discipline inside a company has a place for software development teams, but few companies invest in software teams for their business operations,” he said.
Informatica’s Thompson suggested that generative AI adoption may even reduce the need for citizen development by eliminating cumbersome user interfaces. Tasks that require creating screens, buttons and fields will be accomplished instead with natural-language commands. “It’s not that the business users will do less development; it’s that the need for development at all will disappear,” he said.
However, Gartner analyst Jim Scheibmeir warned of the potential for applications built by business users to proliferate out of control. “Employee confusion may rise as there becomes yet another app built internally by a partnering team,” he said. It could lead to a digital backlash as citizen developers wield their creativity and innovation but add the cognitive load of yet another tool change.
The more likely scenario is that generative AI simply moves developers up the value chain to a more strategic position. “If you think your job is to write code, you don’t understand what the job of a software developer is,” said Jodie Burchell, developer advocate for data science at JetBrains s.r.o. “The job is to solve problems and the tool is the code.”
Over time, the role of the professional developer will evolve into being more about the higher-level tasks machines can’t do, said Tabnine’s Jung. “They’re just pattern matching, but they can’t reason. A lot of computer science education has been focused on syntax. In the future, it’ll be more about the context than syntax.”
While some experts believe AI could ease the shortage in some areas, history doesn’t support the premise that large numbers of jobs will be eliminated. The World Economic Forum has estimated that while workplace automation will displace 85 million jobs, it will create 97 million new ones. “It’s rare that these innovations reduce the need for people,” Thompson said.
The risk of chaos
Next to security concerns, the most common reservation business leaders have about AI adoption is the risk of bias, errors or hallucinations, in which the model generates outputs that are incorrect, nonsensical or not aligned with the data it was trained on.
The concerns are valid. Last August, Stanford researchers published a paper documenting precipitous declines in ChatGPT’s performance on certain tasks over several months because of model drift, or a decay in performance over time due to changes in the input data. In April, researchers at Endor Labs Inc. tested GPT-3.5’s success at identifying malicious code in a corpus of 1,874 samples. The chatbot identified 34 artifacts as malicious, but a manual assessment found that 20 of them were either false positives or contained harmless obfuscated code.
“The experiments suggest that LLM-assisted malware reviews with GPT-3.5 are not yet a viable alternative to manual reviews,” concluded security researcher Henrik Plate.
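The Endor Labs figures translate into a strikingly low precision, which a quick calculation using only the numbers quoted above makes plain:

```python
# Precision of GPT-3.5's malware flags, using the figures quoted above:
# 34 artifacts flagged as malicious, of which 20 were false positives
# or harmless obfuscated code.
flagged = 34
false_positives = 20
true_positives = flagged - false_positives  # 14 genuinely malicious

precision = true_positives / flagged
print(f"Precision: {precision:.0%}")  # prints "Precision: 41%"
```

In other words, fewer than half of the samples the model flagged were actually malicious, which is why Plate concluded the approach could not yet replace manual review.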
Some risk is inherent in the LLM architecture, said Chris Hughes, chief security advisor at Endor Labs. “The models don’t know what they don’t know,” he said. “They’re essentially trained on existing knowledge from across the internet and other repositories of data.” Erroneous data and information covered by copyright or restrictive licensing can make their way into the model.
And sometimes the models just get it wrong. Amplitude’s Reeve recently spent hours trying to figure out why a whiteboard application written by generative AI didn’t work. “It turned out it had gotten one character wrong: > instead of <,” he said. “It was hard to track down because it looked correct.”
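The kind of one-character bug Reeve describes is easy to reproduce. The sketch below is purely illustrative, not Amplitude’s actual code: a flipped comparison operator in a bounds check reads plausibly but pins every shape to the edge of the canvas.

```python
# Illustrative only: a one-character ">" vs. "<" bug of the kind
# described above, in a hypothetical whiteboard app that keeps a
# dragged shape inside the canvas.

CANVAS_WIDTH = 800

def clamp_x_buggy(x):
    # Flipped operator: every in-bounds coordinate satisfies x > 0,
    # so the shape always snaps to the left edge. Looks correct at a glance.
    if x > 0:  # should be: if x < 0
        return 0
    return x

def clamp_x_fixed(x):
    # Correct bounds check: only out-of-range coordinates are clamped.
    if x < 0:
        return 0
    if x > CANVAS_WIDTH:
        return CANVAS_WIDTH
    return x

print(clamp_x_buggy(250))  # prints 0 -- shape stuck at the edge
print(clamp_x_fixed(250))  # prints 250 -- stays where the user dragged it
```

Both functions type-check and run without error, which is exactly why this class of generated bug is hard to spot in review.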
JetBrains’ Burchell tested ChatGPT’s performance at generating data frames, which are two-dimensional tabular data structures commonly used in data analysis and statistics, using the well-established Pandas Python library and the newer Polars library that was released this year.
The Pandas data frame came back with “absolutely flawless code,” she said. When asked to do the same thing in Polars, “the AI model gave me the exact same code but just changed the import name from Pandas to Polars.” The code didn’t work at all, she said. “It was a straight-up hallucination.”
In an enterprise context, there are particular concerns about the potential for generated code to contain copyrighted material or for developers to inadvertently train LLMs with proprietary data and intellectual property.
“When organizations or developers fine-tune LLMs for specific tasks, they often use proprietary or confidential data to improve model performance,” said ThoughtSpot’s Arora. “You must take care to ensure that such data is properly sanitized and any sensitive information is removed before fine-tuning. If you give a lot of your software to a public learning system, you risk making it part of the public model.”
Endor Labs’ Hughes called unintended data disclosure during the training process one of the biggest concerns of organizations that are starting to adopt LLMs. “Organizations need to establish policies and processes around what’s approved, what’s not approved and what kind of use cases we allow and disallow,” he said.
The risk is particularly high in the training stage. “Any AI can drift if the data it’s being given is always brand-new,” said Red Hat’s Griffin.
New technology always has unintended consequences. Amplitude’s Reeve has lately been musing about one: the possibility that generative AI could slow down the adoption of new technologies by making older ones so easy to use.
That’s what makes these radical new innovations so intriguing. We know they’ll trigger massive changes, but we haven’t a clue exactly what those changes will be.