The short answer is YES: cybersecurity professionals still need programming, scripting, and coding skills even though artificial intelligence (AI) can write programs for us. For simplicity, I'll use 'coding' throughout this article to encompass all three, saving a few keystrokes along the way. With that clarification out of the way, let's dive deeper into our main inquiry. This article aims to evaluate whether the required level of coding expertise has shifted from what it once was. We'll explore the enduring necessity of coding skills for cybersecurity professionals and also venture into a hypothetical future where such skills may no longer be a prerequisite. Expect this article to raise more questions than it answers, leaning more towards prediction and curiosity than hard fact. Nonetheless, this topic is not only intriguing but also pivotal in forecasting the future of how we secure technology.
Essential Need for Coding Skills: Despite AI advancements, cybersecurity professionals still require programming, scripting, and coding skills.

Five Key Reasons for Coding Proficiency:
- Generative AI and Data Security: AI tools like ChatGPT pose risks of unintentional data leaks, making it imperative to handle sensitive code without the use of AI.
- Limitations for Offensive Security: AI's ethical constraints limit its use in offensive security tasks, necessitating human coding for creating custom exploits.
- AI's Inability to Handle Large Projects: AI struggles with large-scale coding projects, underscoring the need for human coding expertise.
- AI's Limitations as a Debugger: While AI can find minor bugs, it lacks the ability to think creatively and solve complex coding problems.
- Complementary Role of AI in Coding: AI's coding skills are helpful but not a substitute for human problem-solving and innovation in coding.

Current and Future Role of Coding Skills:
- AI handles simple tasks, but a deep understanding of coding remains crucial for tackling complex challenges in cybersecurity.
- The field is unlikely to be fully automated soon, so ongoing skill development in coding is essential.

Predicting AI's Automation of Coding:
- Complete automation of coding by AI is uncertain, with industry predictions ranging from 10 to 30 years, if at all.
- Emphasis on the importance of adapting to and embracing AI as a tool rather than fearing job obsolescence.

Conclusion: The Collaborative Future of AI and Cybersecurity:
- The future lies in the synergy between AI and human skills, transforming cybersecurity roles towards more strategic, innovative thinking.
- Viewing AI as a partner rather than a competitor in the quest to advance and secure cyberspace.
5 Reasons Why Cyber Pros Still Need Coding Skills
In cybersecurity, coding skills are not just beneficial but often a requirement. For those still contemplating which coding skills to hone, look at my Hack The Box (HTB) article, '7 of the Best Programming Languages for Cybersecurity (Offensive & Defensive)'.
But here's a burning question: Why do we still need coding skills when AI can write code for us? It's a conundrum that many, myself included, have pondered. Having coded complete programs both before and after the rise of generative AI, I've witnessed firsthand how these technologies have evolved. Despite these advancements, my experience leads me to a firm conclusion: To excel in our roles as cybersecurity professionals, understanding the nuts and bolts of coding remains crucial. Let me walk you through the top 5 reasons why coding skills are still indispensable in our field.
1. Generative AI Can Leak Source Code
A significant concern with generative AI tools like ChatGPT is their data usage. When you interact with these AIs, your input contributes to their training, potentially leading to data sharing beyond your control. Picture this: Person 1 shares data with the AI, which then retains this information. Later, Person 2 asks a related question, and the AI might inadvertently reveal aspects of Person 1's data. This implies a crucial caution: treat your interactions with AI, including ChatGPT, as non-confidential. It's unwise to entrust AI with sensitive or secret information.
Now, I'm not outright accusing companies like OpenAI of deliberately designing their AI to function this way, though it's not entirely implausible, especially considering big tech's history with user data and advertising. It seems more likely, however, that this is an accidental byproduct of how these Large Language Models (LLMs) operate.
A case in point occurred in April 2023 when Samsung experienced a leak of its proprietary source code, which was attributed to employees inputting the code into ChatGPT. The intricate details of this incident are beyond this article's scope — let's stick to the basic dynamics I've outlined. The key takeaway is that such leaks do happen and pose a real risk, particularly when dealing with proprietary code.
For us in cybersecurity, handling proprietary code, whether for our employer or a client, involves a high degree of trust and confidentiality. The temptation might arise to use ChatGPT for code reviews, looking for bugs or vulnerabilities. However, as ethical professionals, this isn't a path we can take. Given the inherent risks of AI in this context, we must lean on our actual coding skills for such critical tasks.
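While the safest move is simply never pasting proprietary code into an AI tool, a last line of defense is to check snippets for obvious secrets before they leave your machine. Here's a minimal illustrative sketch of that idea - the patterns are deliberately simplified, and real tools use far larger rule sets:

```python
import re

# Illustrative patterns only; dedicated secret scanners ship
# with hundreds of rules covering many credential formats.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Hardcoded credential": re.compile(
        r"(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
}

def find_secrets(text: str) -> list:
    """Return the names of any secret patterns found in a snippet."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(text)]

# If this returns anything, the snippet should never reach a third-party AI.
snippet = 'aws_key = "AKIAABCDEFGHIJKLMNOP"'
hits = find_secrets(snippet)
```

A scan like this catches only the most blatant leaks; it does nothing about proprietary logic itself, which is exactly why human review remains the only safe option for confidential code.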
2. Offensive Security Professionals Can't Fully Utilize AI
As a penetration tester and ethical hacker, my task is to unearth vulnerabilities and weaknesses in networks and systems before criminal hackers can exploit them. This job often involves using the same malware and hacking tools as those employed by cybercriminals. The key difference? I have authorized access to break into these networks to strengthen security, not exploit them for harm.
Creating custom exploits tailored to each client's unique environment is a core part of my work. It's intricate and demands a significant investment of time and effort. Naturally, like anyone else in the workforce, I'm on the lookout for ways to streamline my tasks. Generative AI, with its ability to automate repetitive tasks and devise solutions, is a promising ally. However, there's a catch.
The major hiccup is that most tasks in offensive security are, by nature, malicious. OpenAI's policies are designed to prevent ChatGPT from being misused for harmful purposes. Regardless of whether I specify that I'm an ethical hacker with authorization or a student learning in a controlled lab environment, ChatGPT is programmed to avoid assisting in any activities that could potentially lead to harm. It's a safeguard to prevent misuse, aligning with efforts to secure the digital world.
This ethical boundary set by AI creators means that offensive security professionals like myself can't rely on AI assistance for tasks like exploiting systems. We must still depend on our core coding skills to develop our scripts and malware for ethical hacking purposes. It's a reminder that human ingenuity and technical skills remain irreplaceable in offensive cybersecurity.
3. AI Can Only Write Small Programs
In November 2023, I embarked on a project to enhance the user-friendliness of 'ntfy' - a tool designed to send push notifications to another device from your terminal. While the original 'ntfy' wasn't my creation, I developed an additional layer to streamline its functionality, reducing the keystrokes needed for its operation. I kept the name 'ntfy' in homage to the original - and since it's only four letters, you can send a notification from your terminal to your phone in just four keystrokes. It's a tool I'm quite proud of. Ironically, I dislike distractions while working, so I typically don't keep my phone in my office. In truth, I built the tool more for others than for myself.
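To give a flavor of what such a wrapper does (a minimal sketch, not my actual tool - the topic name is a placeholder), ntfy's documented HTTP API just takes a POST of the message body to https://ntfy.sh/&lt;topic&gt;:

```python
import urllib.request

NTFY_SERVER = "https://ntfy.sh"  # public server; self-hosted instances work too

def build_request(topic: str, message: str) -> urllib.request.Request:
    # ntfy's publish API: POST the raw message body to <server>/<topic>.
    return urllib.request.Request(
        f"{NTFY_SERVER}/{topic}",
        data=message.encode("utf-8"),
        method="POST",
    )

def notify(topic: str, message: str) -> int:
    """Send a push notification; any device subscribed to the topic gets it."""
    with urllib.request.urlopen(build_request(topic, message)) as resp:
        return resp.status
```

With a function like this aliased to a four-letter command, notify("my-alerts", "scan finished") reaches your phone the moment a long-running job completes.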
In this endeavor, ChatGPT was my go-to assistant for efficiency. It played a significant role up to a certain stage of the development process. However, I hit a roadblock with AI assistance when the project scaled up. Despite the entire tool comprising a modest 391 lines - small by many standards - it was beyond ChatGPT's capability to manage effectively. This limitation forced me back to relying on my coding skills to complete the project.
This experience underscores a critical reality: for larger-scale projects, particularly those sprawling across numerous files and containing extensive lines of code, fully relying on AI for development is not yet practical. While AI tools like ChatGPT have showcased a fantastic ability in handling entry-level coding tasks, they fall short in more complex, voluminous programming endeavors.
Looking ahead, I foresee a time when AI's capabilities in software development will expand. However, this future seems a distant horizon, not an imminent change. Developers, for now, can rest easy. Their skills and roles are not on the brink of being automated out of relevance. The nuanced, intricate art of coding, especially for larger projects, remains firmly in the domain of human expertise.
4. AI Isn't a Foolproof Debugger
In my journey with ChatGPT, I've written numerous scripts with its assistance. However, it's a recurring theme that ChatGPT sometimes gives the all-clear on code that's not functioning correctly. In these instances, it's back to basics for me, relying on my own coding know-how to untangle the issues. Sure, ChatGPT can spot a range of minor bugs, but it's not infallible – it can overlook the simplest of errors, much like a human coder might.
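To illustrate the kind of 'simplest of errors' that slips past reviewers, human and AI alike, consider Python's classic mutable default argument pitfall (an illustrative example, not one from a real engagement):

```python
def add_finding(finding, findings=[]):  # BUG: the default list is created once
    findings.append(finding)            # and shared across every call
    return findings

first = add_finding("open port 22")
second = add_finding("weak TLS config")
# second is ["open port 22", "weak TLS config"] - state leaked between calls.

def add_finding_fixed(finding, findings=None):
    # Correct pattern: create a fresh list per call when none is supplied.
    if findings is None:
        findings = []
    findings.append(finding)
    return findings
```

The buggy version looks perfectly reasonable at a glance, which is exactly why a confident "this code looks fine" from an AI assistant is never a substitute for understanding what the code actually does.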
But here's where the paths diverge between AI and human coders: thinking outside the box. ChatGPT, for all its capabilities, has its limits in problem-solving. It might make several attempts to rectify an issue, but there's a threshold beyond which it won't cross. In contrast, a human coder can persist through numerous failures, driven by creativity and determination to find a solution.
So, for now, cybersecurity professionals and developers can't always lean on AI in those critical moments of debugging. We have to fall back on our own expertise and intuition. Yet, I'm optimistic about the future. Consider the vast amount of code that AI has been training on since its inception. It's not far-fetched to imagine a time when AI becomes a much more adept debugger. It's useful today, but not yet as capable as it could be.
As we navigate these limitations of AI in debugging, it's intriguing to consider how this technology will evolve. Will AI ever match the nuanced understanding of a seasoned programmer? This leads us to examine AI's broader role in coding.
5. AI Coding Skills Are Good, but Not Great
While AI has demonstrated proficiency in certain areas of coding, it's crucial to highlight that its skills are more complementary than substitutive. AI can efficiently handle straightforward, well-defined tasks and provide coding suggestions that can save time. However, when it comes to understanding the complex, often nuanced requirements of larger, more intricate projects, AI's capabilities are still in their early stages. It cannot grasp the full context and subtleties that a human coder inherently understands.
Moreover, the creativity and innovation that human coders bring to the table are unparalleled. When faced with unconventional problems or innovating new solutions, human coders can navigate uncharted territories, which AI is far from achieving at this point. AI's coding skills, while helpful, are not yet at a level where they can replace a seasoned programmer's deep, intuitive understanding and creative problem-solving ability.
AI is a valuable tool in the coding arsenal. However, it's not a silver bullet for all programming challenges. For now, the blend of AI's efficiency and human ingenuity is the ideal mix in cybersecurity and development.
The synergy between AI and human creativity hints at a future where coding isn't just about writing lines of code but also about understanding how and when to leverage AI effectively. As we continue to explore this partnership, the question remains: how far can we integrate AI into the art of coding without losing the essential human touch that drives innovation?
What Level of Coding Skills Do Cyber Pros Still Need?
Having established that coding skills are still necessary in cybersecurity, we are left to ponder: With AI handling simpler tasks, do we really need to be as proficient in coding as before?
From my perspective, the answer isn't simple. Yes, AI has revolutionized how we approach certain aspects of cybersecurity, particularly by taking over more mundane or straightforward coding tasks. This shift has undoubtedly made some aspects of our jobs easier, saving us valuable time. However, this doesn't mean we can downplay the importance of coding skills. It might be more crucial now than ever.
The complexity and ever-evolving nature of cybersecurity mean that the field is far from being fully automated. As advanced as it is, AI still falls short in tackling the more intricate, nuanced challenges that are commonplace in our field. It acts as a tool that allows us to bypass the basics and dive straight into the complexities. But here's the catch: grappling with the 'hard stuff' without a solid understanding of the 'easy stuff' becomes exponentially more difficult.
Thus, my advice to fellow cybersecurity professionals is straightforward: continue honing your coding skills. Embrace the fundamentals and keep abreast of new developments. The cybersecurity landscape is one where AI and human expertise coexist, and our strength lies in our ability to navigate both realms. The coding lessons you take today and the programming fundamentals you learn are not just for the present — they are investments in your future ability to adapt, innovate, and excel in a constantly changing field.
AI may streamline certain processes, but the core of cybersecurity still relies heavily on human insight, creativity, and technical know-how. By maintaining and enhancing our coding skills, we ensure that we are prepared not just for today's challenges but also for tomorrow's unknowns.
When Will Coding Skills Be Fully Automated By AI?
Posing the million-dollar question: when will AI fully automate coding? In 2024, predicting this with pinpoint accuracy would be a lucky guess at best. This is a question of gradual evolution rather than overnight revolution. If and when it happens, the transition will likely be a subtle shift rather than an abrupt turn of events.
The range of predictions within the industry is broad. Some of my peers speculate it could be as soon as 10-20 years. Others believe it might never happen. The idea of complete automation within a decade seems far-fetched to me. Yet, if it were to occur in 30 years, I wouldn't be as surprised. The uncertainty makes it a challenging forecast, and the truth probably lies somewhere in that broad spectrum of predictions.
However, this uncertainty shouldn't deter learning and honing coding skills. The potential of AI to replace certain aspects of our work — or even our jobs entirely — is a broader existential question that applies to many fields, not just coding and cybersecurity. It's essential to remember that such a shift could open up new avenues and opportunities. If AI takes over our routine tasks and chores, it could free us to engage in more creative, fulfilling, and intellectually challenging endeavors. This possibility could herald a new era where human innovation and AI efficiency coexist, pushing the boundaries of what we can achieve as a species.
Therefore, rather than losing sleep over the fear of becoming obsolete, we should embrace the exciting prospect of AI as a partner in progress. It's about shifting our focus from mundane tasks to engaging in pursuits that truly challenge us and bring about breakthroughs for humanity. The future, bright or dark, is ours to shape, and our interaction with AI will be a defining factor in that journey.
Navigating the Future: Cybersecurity's New Era
As we embrace the intersection of AI and human expertise in cybersecurity, we're not just witnessing a change; we're part of a transformation. This evolution challenges us to rethink our roles: from technical experts to strategic innovators. The true value lies in our creativity, ethical judgment, and ability to adapt – qualities that AI cannot replicate. The advent of AI isn't a threat to our skills but an opportunity to augment them, freeing us to tackle more significant challenges and make more meaningful contributions.
In this future, the synergy between our human ingenuity and AI's capabilities promises a richer, more effective approach to cybersecurity. It's a journey filled with potential and promise, urging us to grow alongside our technological counterparts. As we forge ahead, let's view AI not as a rival but as a partner in our continuous quest to secure cyberspace with innovation and insight.