When it comes to AI replacing humans, the public seems to think graphic designers will be the first to go.
Image generators like DALL-E 2 create the most visible—and therefore shareable—proof of what AI can do. How can you look at a photorealistic depiction of dogs playing poker, generated on the fly, and not say “whoa”?
To people who believe a graphic designer’s job is making pretty pictures in Photoshop, it looks like AI is on the cusp of total industry domination.
To actual graphic designers—most of whom know better—the promises and perils of AI and graphic design are less clean cut. It’s true that AI can effectively generate images, and even handle more complex tasks, like creating graphic layouts or editing photos. But it’s also true that current AI falls far short of the entire package of skills a human designer offers.
What can you do to keep up with AI and compete with it, while still using it to your advantage? How will AI affect the graphic design field in the future? More on that in a moment.
But first: a look at how AI graphic design works—plus what it does well, and what it doesn’t.
The dawn of graphic AI
The casual observer might be forgiven for believing AI started creating images in the summer or fall of 2022. That’s when AI image art really hit its stride: DALL-E 2 made its services widely available, and the results went viral on social media.
Before then, folks were turning their photos into Renaissance portraits, which was about as advanced as AI image generation got. Seems quaint now, doesn’t it?
But the history of AI and graphic design goes back much further than 2022. A truly deep dive into how AI and AI art evolved is beyond the scope of this article, but the field is older than you might expect.
On Medium, designer Keith Tam shares how, in 1966, Ken Garland suggested in his Graphics Handbook that computers might come for designers’ jobs.
Even in 1966, when a typical computer occupied a small room, Garland recognized that, thanks to the advances of technology, the tasks with which a graphic designer “expects to be confronted” would change.
He was right. In Garland’s time, before home computers, the tools of the trade included rubber cement, technical pens, and scale rulers. Nobody was confronted with the task of using Photoshop.
The day-to-day, hands-on process of graphic design changed drastically in the next 25 years thanks to computers. With the advent of effective AI graphic design tools, what will the next 25 look like? And how can you prepare?
At this point, it’s a good idea to stop and look at how AI does—and doesn’t—work.
How does AI art work?
Even if it isn’t always perfect (see: the hands and teeth problem), it’s hard to look at artwork AI has created and not feel at least a faint tickle of awe.
It’s the work of a computer, after all. Computers: Sometimes they freeze, and you have to turn them off and then on again to make them work. If you spill coffee on the keyboard, it’s a disaster. And, every 4 or 5 years, each of them seems to need replacing.
And yet, with the input of a few words, a computer can dream up a seemingly original piece of art. How?
Artificial narrow intelligence (ANI) vs. artificial general intelligence (AGI)
AI comes in two flavors: artificial general intelligence (AGI) and artificial narrow intelligence (ANI).
AGI is a computer—or a network of computers—that thinks and learns like a human. It broadly mimics intelligence as we understand it, asking questions like, “What is… love?” and calling the scientist who invented it “Father.” AGI is still science fiction.
Turns out, there’s a certain je ne sais quoi to the human mind that machines currently can’t capture. A lot of very clever people are trying to change that; they’d love to have their own C-3PO or Data to hang out with. But true AGI is a long way out.
ANI does exist. ANI is trained to do one thing well, like write responses in a chat window, or generate images. ANI is what we talk about when we talk about AI graphic design.
How does an AI image generator work?
The ANI that powers an image generator is built using neural networks. A neural network is structured like a very rough approximation of a brain, with many nodes able to connect and disconnect from one another.
Neural networks need training in order to learn how to complete tasks. This training is overseen and managed by humans, and usually involves a lot of tweaking along the way. For most AI—including tools like ChatGPT, as well as image generators—the training consists of looking at huge amounts of data and inferring patterns from it.
For the task of image generation, this involves feeding a neural network millions of images from the internet, along with their accompanying metadata (file names and captions). The neural network finds correlations between words in the metadata and the contents of the image files, and creates rules from them.
For instance, the neural network might scan 10,000 files named “hippo.jpg.” As it does, it looks at the contents of the image files, finds commonalities between them, and uses that information to create rules about how a hippo looks.
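To make that idea concrete, here’s a deliberately tiny, purely illustrative sketch of this kind of pattern-finding in Python. Real systems learn from raw pixels via neural networks; this toy substitutes hand-tagged “features” and simple counting for the statistics a network would infer, so every name and feature here is an invented stand-in.

```python
from collections import Counter

# Toy training set: (filename label, crude image "features") pairs.
# In a real system the features would be pixel data, not hand-written tags.
training_data = [
    ("hippo", {"grey", "round", "water"}),
    ("hippo", {"grey", "round", "mud"}),
    ("hippo", {"grey", "round", "water"}),
    ("flamingo", {"pink", "thin", "water"}),
]

# "Training": count how often each feature co-occurs with each label.
rules = {}
for label, features in training_data:
    rules.setdefault(label, Counter()).update(features)

# The learned "rule" for a hippo is whatever features dominate its examples.
hippo_rule = [feature for feature, count in rules["hippo"].most_common(2)]
print(sorted(hippo_rule))  # grey and round win; water and mud are weaker signals
```

Note that “water” shows up for both hippos and flamingos, which is exactly how a real network ends up with fuzzy, overlapping rules rather than crisp definitions.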
This approach isn’t perfect. For instance, when researchers first started training neural networks on images from the internet, lolcats were in their Golden Age. Many of the cat pictures the neural network was trained on featured blocky, white, overlaid text.
After being trained, when researchers tried getting the AI to generate images of cats, a lot of the images it created included random bits of text. The neural network believed that, in addition to whiskers, pointy ears, and cute little paws, cats were made of letters.
Gains made with GAN
You may have heard the term GAN, which stands for Generative Adversarial Network. It was a model for AI dreamed up in 2014 by doctoral student Ian Goodfellow, allegedly while drinking in a Montreal pub.
Once AI projects started implementing and fine-tuning GAN, the quality of AI image generation rapidly improved.
The GAN approach pits two neural networks against one another. One neural network, the generative network, is trained with a set of data—typically, images from the internet, as explained above—and uses it to generate images based on prompts. The other network—the discriminative network—evaluates the results, determining which images are most likely to find human approval.
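The adversarial dynamic can be sketched with a deliberately tiny toy. In this sketch the “generator” is a single number, the “discriminator” is a hand-written scoring function rather than a trained network, and the generator improves by nudging its output toward a higher score. All of those are assumptions made for illustration; real GANs use two deep networks trained jointly by gradient descent.

```python
import random

random.seed(42)

# "Real" data the generator should learn to imitate: values near 5.
real_data = [random.gauss(5.0, 0.5) for _ in range(1000)]
real_mean = sum(real_data) / len(real_data)

def discriminator(x):
    # Hand-written stand-in for a trained network: scores higher
    # the more x resembles the real data.
    return 1.0 / (1.0 + abs(x - real_mean))

g = 0.0   # the generator's lone parameter: the value it outputs
lr = 0.5
for _ in range(300):
    # The generator nudges its output in whichever direction
    # earns a better score from the discriminator.
    up, down = discriminator(g + 0.01), discriminator(g - 0.01)
    g += lr * (up - down) / 0.02

print(round(g, 1))  # g has drifted from 0 toward the real data's mean
```

The key structural point survives the simplification: the generator never sees the real data directly. It only sees the discriminator’s verdicts, and that feedback loop alone is enough to pull its output toward something realistic.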
How diffusion cuts through the noise
By now (2023), GAN is old news. Diffusion is the hot new generative model on the scene. In many cases, diffusion has outcompeted GAN in terms of accuracy, and it’s the technology currently powering DALL-E 2.
Fundamentally, diffusion models learn how to generate images by taking training images, converting those images to visual noise, and then—through repetition and observation—learning how to do the same thing backwards. That is, they learn how to take noise, and convert it into original images.
With a lot of practice, an AI using the diffusion model learns to recognize patterns in how certain types of images convert to certain types of noise, and uses that information to generate original images based on prompts.
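The forward half of that process (turning a clean image into noise, step by step) is simple enough to sketch. The decay factor and noise scale below are arbitrary illustrative choices, and the “image” is just four numbers; the hard part, which this sketch omits entirely, is training a network to run the process in reverse.

```python
import random

random.seed(0)

def forward_diffuse(x, steps, decay=0.95, noise_scale=0.1):
    """Gradually destroy a signal by shrinking it and mixing in Gaussian
    noise at every step (the 'forward' half of a diffusion model)."""
    trajectory = [x]
    for _ in range(steps):
        x = [decay * v + random.gauss(0, noise_scale) for v in x]
        trajectory.append(x)
    return trajectory

# A tiny four-"pixel" image.
image = [1.0, 0.5, -0.5, -1.0]
traj = forward_diffuse(image, steps=50)

# After 50 steps the original structure is essentially gone: what's left
# is close to zero-mean noise. A diffusion model trains a network to undo
# these steps one at a time, which is what lets it start from fresh noise
# and walk backwards to a brand-new image.
print([round(v, 2) for v in traj[0]], [round(v, 2) for v in traj[-1]])
```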
The power of prompting
Image generators take text prompts from humans and use them to generate images. Vague or confusing prompts create vague or confusing outputs. On the other hand, the more specific and concrete you’re able to make a prompt, the closer the output will be to what you’re looking for.
Prompting is important not only because it steers AI’s output, but because graphic designers who know how to do it are better able to use AI to their advantage.
The most important thing to know about AI prompting is that AI tools are typically literalists. The more specific you can be, the closer you’ll get to the output you’re looking for: there’s a big difference between “hippo cupcake” and “hippo eating a cupcake.”
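One lightweight way to build that habit of specificity is to assemble prompts from concrete parts (subject, action, medium, style) instead of free-associating. The helper below is a hypothetical convenience function for illustration, not any image generator’s real API:

```python
def build_prompt(subject, action=None, style=None, medium=None):
    """Assemble a specific text prompt from concrete parts.
    Parts you leave out are simply omitted, producing a vaguer prompt."""
    parts = [subject]
    if action:
        parts[0] = f"{subject} {action}"   # literalists need the verb spelled out
    if medium:
        parts.append(medium)
    if style:
        parts.append(f"in the style of {style}")
    return ", ".join(parts)

# A vague prompt versus a specific one:
print(build_prompt("a hippo"))
print(build_prompt("a hippo", action="eating a cupcake",
                   medium="digital illustration", style="a children's book"))
```

Feeding the second, fully specified prompt to an image generator constrains the output far more than the bare subject would.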
Graphic design work AI does well
By now, you probably recognize that the quality of an image generator’s output depends on how it’s prompted, and the skill of the person doing the prompting. You may already have some creative ideas about how you can start using image generators to boost your own productivity as a graphic designer.
But the role of AI in the future of graphic design doesn’t start and end with image generators. Numerous AI as a service (AIaaS) companies have popped up offering tools that handle specific design tasks.
Understanding how AI affects graphic design now—which tasks AI can currently handle, and which ones it can’t—will give you an idea of what you’re up against in terms of competition. It should also give you new ideas for how you can use AI to your advantage.
Sketching
AI is pretty good at taking rough sketches made by humans and guessing what they’re trying to illustrate. A number of AI tools will take your hand-drawn input and turn it into something more polished.
That type of tool can be useful if your illustration skills are weak but you need sketches to communicate with clients or take notes. For instance, check out AutoDraw. As you sketch, it will quickly suggest images to replace your input.
Basic logos
For a while, freelance designers who were willing to churn out logos en masse on platforms like Fiverr could earn themselves a decent revenue stream.
Those days are fading. There’s a plethora of AIaaS tools offering to quickly produce original logos for businesses. You’re unlikely to find big-ticket clients going that route, but for your average local landscaper, hair stylist, or vehicle detailer who just wants something to put on their business cards, AI-generated logos are a great deal.
Web design
Even if web design doesn’t fall in your wheelhouse, it’s smart to keep an eye on how AI is changing the game. Business owners who may once have hired a web designer to create a site for them are increasingly turning to tools like Durable’s AI website designer, which is able to generate a custom tailored site in seconds.
Palettes
When it comes to brainstorming palettes for a project, AI has the market cornered. Tools like Khroma help you generate, modify, and save palettes based on a few simple inputs. (In Khroma’s case, you select your 50 “favorite” colors, and the AI puts together a selection of two-color combinations for you.)
It’s hard to see how a tool like this would put graphic designers out of their jobs. When was the last time a client came to you looking for a color palette? But it can definitely help get the juices flowing when you’re just getting a project off the ground.
Image enlargement/enhancement
Companies like VanceAI claim their tools can upgrade image resolution and improve quality using advanced AI, and the results aren’t too shabby.
Whether you could manage something comparable in a few minutes using Photoshop is beside the point: AI enlargers and enhancers are fast and, in the case of VanceAI, handle batch processing. Next time a client dumps a pile of photos from their flip phone on you and asks you to turn them into a print brochure, it could be AI that saves the day.
Product shots
If you’ve spent any amount of time editing the background out of product photos, you know what an onerous task it can be. Luckily, neural networks don’t get bored. (As far as we know.)
Tools like Remove.bg will take product photos, remove the background, replace it, and deliver the final package to you—like a Magic Wand tool that’s actually magic. If you work with a lot of ecommerce clients, it has the potential to be a massive time saver.
Stock models
AI makes fake humans. That is, AI image generators are being used to create photos of people who don’t exist.
Whether you find it awe-inspiring or downright creepy, there’s no denying the usefulness of being able to generate a totally random headshot on a whim—with no need for stock photo licenses or signed release forms.
Like AI-generated color palettes, face generators aren’t likely to poach your clientele. But they could help you with a variety of tasks, from rough mock-ups to finished products for clients.
Graphic design work AI doesn’t do well (yet)
The future of graphic design depends upon which jobs AI will be able to handle on its own, and which will require lots of human input. Recent improvements in AI output are impressive, but neural networks still fall short when it comes to a lot of the day-to-day work handled by graphic designers. Here are some areas where AI has yet to make serious inroads.
Packaging
Some experiments have yielded passable packaging designs created by AI, but there’s no AIaaS that’s really crushing it at the moment. There are so many variables at play when it comes to packaging—like materials, or display and shipping needs—that AI is not yet equipped to handle it from start to finish.
You may be able to use AIaaS like Designs.ai to put together a label for a standard sized package or container; anything beyond that takes human skill.
Visual identity / brand books
Some AIaaS tools promise to take your palette, typefaces, visual elements, and copy, throw them in a blender, and create a brand book or a complete set of assets for a marketing campaign. But creating a comprehensive package like that takes a lot of consultation with clients, a lot of back-and-forth, and fine-grained attention to detail.
For a mom-and-pop restaurant looking for new menus and a prettier website to match, AI might do the trick. But bigger clients expect more.
Motion design
When it comes to motion design—creating animations and other moving assets—AI still lags behind. That’s not to say AI-assisted motion design tools won’t pop up in the future; rumor has it GPT-4 may have some form of video capability. But for now, motion design falls firmly within the realm of human expertise.
Environmental design
The physical scale of assets used in environmental design, not to mention the complexity of working with a real-life, human-navigable space, puts it out of the reach of current AI solutions.
High definition original art
Even image generators like Midjourney or NightCafe, which give living and breathing illustrators and designers a run for their money, are limited in usefulness by the sizes of the images they output.
For instance, NightCafe charges 3 credits to output a 0.8 megapixel image (896 x 896). If you’re looking for high resolution art to be adapted across multiple digital platforms, or even something you could use as a print asset, you’re out of luck.
At the moment, the ability of AI to overtake professional illustrators is held back by the huge computing cost of creating high resolution images.
AI graphic design and copyright
As of early 2023, AI-produced work and the finer nuances of copyright law are a big question mark.
Until some clear precedents are set, it’s best to follow these guidelines when using AI to help create work for clients:
- Make sure whatever AIaaS tool or image generator you’re using explicitly states that its output may be used for commercial purposes
- If you deliver unmodified AI-generated work to clients, be clear and upfront about its origin
- For the time being, assume that any unmodified work you create with AI cannot be copyrighted
- Keep an eye on the news. Court cases relating to AI and copyright are just beginning to emerge, and the field is changing quickly
Bottom line: Don’t let fears about copyright stop you from experimenting with AI, but be cautious and transparent about how you use it, and don’t assume you’ll be able to copyright anything you create.
(It should go without saying, the above does not constitute legal advice, etc.)
What you need to know to keep up with AI
Now that you’ve had a chance to look at what makes AI tick and understand both its strengths and weaknesses, it’s time to acknowledge the (many-toed, slightly “off” looking) elephant in the room: competition.
It would be naive to assume that potential clients won’t begin turning to AI to handle work typically assigned to graphic designers. But it would also be naive to assume there’s no way to compete against AI graphic design, or even use it to your advantage.
Skill sets to help you compete
Two skill sets can help you compete with AI: Originality, and the so-called “soft skills” of client management. Developing these skill sets to the highest standards possible and marketing them as part of what makes you unique as a designer will help you stand out from the AI pack.
Originality
AI-generated art and design assets are unoriginal by definition. AI works by looking at lots of examples, then trying to mimic them.
For some, that may seem like a rough approximation of how humans learn to create as well. But as an embodied, living being, you have a deep, unplumbable well of memories, sense experiences, and emotions to draw upon. You’re uniquely able to create work no existing AI can.
Look at AI-generated design as a challenge to fully draw on your individual creativity. When neural networks can churn out predictable brand logos and monotonous digital airbrush art ad nauseam, graphic designers can no longer run on cruise control and get by. Mediocrity won’t cut it. So challenge yourself to come up with work that surprises you and your clients, and make it a highlight of how you market your services to the world.
The “soft skills”
Art generators are good at taking instruction in the form of prompts, and bots like ChatGPT can hold up their own end of the conversation. But when it comes to graphic design, they aren’t able to understand clients’ needs as quickly and intuitively as a real human.
Whether you’re working freelance or in-house, you know you always have to be ready to adapt to and work with clients’ demands—no matter how frivolous, short-sighted, or downright silly they may be.
Many otherwise kind, caring, and competent humans have serious issues describing what they want. Luckily, when you are also a human, you have lots of pre-programmed intuitive power working in your favor. When your client tells you they like a design, but they’d like it to “pop” a bit more, you probably grit your teeth. But you probably also know what types of adjustments you need to make so your client feels the promises of that three letter word have been fulfilled.
Don’t let yourself fall into the trap of narrowing client communications to a bare minimum or providing robotic responses to their requests. As AI develops further, it will be your soft skills—your soft, messy, human skills—that help you work effectively with clients, and differentiate you from the robots.
Skills to help use AI to your advantage
Whether you’re already signing up for every AIaaS tool you can find, or you’re still making your first forays into the world of image generators, you’re on the right track. Using AI to your advantage means offloading repetitive, uninspiring work to machines so you can focus more on the work you enjoy. Here are two concrete skills you can start working on now to help that happen.
Prompt engineering
Thinking about the future of graphic design jobs, it’s hard to imagine one where AI prompting isn’t a key competency for designers across many different fields.
Coming up with effective prompts for AI is an art unto itself, a field known as prompt engineering. Learning the rudiments of prompt engineering, and how to write prompts that get you the output you’re looking for, is essential if you plan to use AI to aid your graphic design work.
One good place to get started is Leoni Monigatti’s beginner’s guide to text-to-image prompting. The quickest and most effective way to get a handle on prompting is to study resources like Monigatti’s guide, and then begin practicing with whatever image generators you have at your disposal.
Regular touch-ups and editing
AI’s graphic output is often just shy of “good enough.” That applies as much to more complex tasks, like generating graphic layouts or resizing and cropping images for social media, as it does to illustrations spit out by image generators.
Whatever you plan to use AI for, identify the ways it commonly falls short, and create time estimates for the work it takes to make it right; incorporate that extra time when you’re planning your work hours or quoting clients. And try to develop repeatable steps you can follow each time you use AI so you can handle touch-ups as quickly as possible.
The future of AI graphic design
It’s hard to say exactly what changes will come in the AI design field, or what the future of graphic design jobs holds, but it helps to look at the growing capability of AI as a whole.
Current viral AI stars like ChatGPT used GPT-3 as their starting point. (You can learn more about GPT-3 from our article on AI copywriting.) As of this writing, GPT-4 is on the horizon. A few predictions from Alberto Romero, an analyst in the sphere of AI, based on the most recent rumors:
- GPT-4 will be able to accept audio and video input
- It will be much larger than GPT-3 (in terms of the number of connections it uses to generate output), but will use them differently, in a way that’s closer to how brains work (only using a portion of its available “neurons” at any one time)
- It may be able to pass the Turing test (during a conversation with the AI, it will be impossible for the average person to determine it’s an AI, and not just another human being)
What does this mean for graphic designers? Most likely, AI tools will become better at interpreting input from users and anticipating requests. You may find yourself chatting with an AI tool the same way you’d chat with another designer, telling them how and where to make adjustments to certain work, or even whole production processes.
One thing is for certain: AI is here to stay, and so is its influence on graphic design. The sooner you can begin to adapt, the better you’ll be able to benefit from the changes to come.